00:00:00.001 Started by upstream project "autotest-per-patch" build number 126264
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.104 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.105 The recommended git tool is: git
00:00:00.105 using credential 00000000-0000-0000-0000-000000000002
00:00:00.108 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.167 Fetching changes from the remote Git repository
00:00:00.169 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.217 Using shallow fetch with depth 1
00:00:00.217 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.217 > git --version # timeout=10
00:00:00.255 > git --version # 'git version 2.39.2'
00:00:00.255 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.275 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.275 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.482 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.495 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.508 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD)
00:00:06.508 > git config core.sparsecheckout # timeout=10
00:00:06.521 > git read-tree -mu HEAD # timeout=10
00:00:06.539 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5
00:00:06.564 Commit message: "jenkins/jjb-config: Purge centos leftovers"
00:00:06.565 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10
00:00:06.683 [Pipeline] Start of Pipeline
00:00:06.697 [Pipeline] library
00:00:06.699 Loading library shm_lib@master
00:00:06.699 Library shm_lib@master is cached. Copying from home.
00:00:06.715 [Pipeline] node
00:00:06.722 Running on WFP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:06.723 [Pipeline] {
00:00:06.731 [Pipeline] catchError
00:00:06.732 [Pipeline] {
00:00:06.741 [Pipeline] wrap
00:00:06.749 [Pipeline] {
00:00:06.758 [Pipeline] stage
00:00:06.760 [Pipeline] { (Prologue)
00:00:06.974 [Pipeline] sh
00:00:07.253 + logger -p user.info -t JENKINS-CI
00:00:07.274 [Pipeline] echo
00:00:07.275 Node: WFP6
00:00:07.282 [Pipeline] sh
00:00:07.579 [Pipeline] setCustomBuildProperty
00:00:07.590 [Pipeline] echo
00:00:07.590 Cleanup processes
00:00:07.595 [Pipeline] sh
00:00:07.872 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.873 3095500 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.886 [Pipeline] sh
00:00:08.168 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.168 ++ grep -v 'sudo pgrep'
00:00:08.168 ++ awk '{print $1}'
00:00:08.168 + sudo kill -9
00:00:08.168 + true
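The cleanup step above reduces to a single shell idiom; a minimal sketch reconstructed from the commands in the log (the workspace path is specific to this node):

  # Kill any SPDK processes left over from a previous run of this workspace.
  # grep -v 'sudo pgrep' drops the pgrep invocation itself, and the trailing
  # '|| true' keeps the step green when nothing is left to kill -- which is
  # why the log shows a bare 'kill -9' followed by '+ true'.
  WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest
  sudo kill -9 $(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}') || true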
00:00:08.183 [Pipeline] cleanWs
00:00:08.193 [WS-CLEANUP] Deleting project workspace...
00:00:08.193 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.198 [WS-CLEANUP] done
00:00:08.204 [Pipeline] setCustomBuildProperty
00:00:08.221 [Pipeline] sh
00:00:08.501 + sudo git config --global --replace-all safe.directory '*'
00:00:08.591 [Pipeline] httpRequest
00:00:08.618 [Pipeline] echo
00:00:08.619 Sorcerer 10.211.164.101 is alive
00:00:08.628 [Pipeline] httpRequest
00:00:08.633 HttpMethod: GET
00:00:08.633 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:08.634 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:08.650 Response Code: HTTP/1.1 200 OK
00:00:08.651 Success: Status code 200 is in the accepted range: 200,404
00:00:08.651 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:12.587 [Pipeline] sh
00:00:12.869 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:12.888 [Pipeline] httpRequest
00:00:12.921 [Pipeline] echo
00:00:12.924 Sorcerer 10.211.164.101 is alive
00:00:12.933 [Pipeline] httpRequest
00:00:12.938 HttpMethod: GET
00:00:12.938 URL: http://10.211.164.101/packages/spdk_315cf04b687f14c3c82fc09dee409366211dfcff.tar.gz
00:00:12.939 Sending request to url: http://10.211.164.101/packages/spdk_315cf04b687f14c3c82fc09dee409366211dfcff.tar.gz
00:00:12.964 Response Code: HTTP/1.1 200 OK
00:00:12.964 Success: Status code 200 is in the accepted range: 200,404
00:00:12.965 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_315cf04b687f14c3c82fc09dee409366211dfcff.tar.gz
00:00:59.276 [Pipeline] sh
00:00:59.557 + tar --no-same-owner -xf spdk_315cf04b687f14c3c82fc09dee409366211dfcff.tar.gz
00:01:02.099 [Pipeline] sh
00:01:02.378 + git -C spdk log --oneline -n5
00:01:02.378 315cf04b6 bdev/nvme: populate socket_id
00:01:02.378 eed732c9a bdev: add socket_id to spdk_bdev
00:01:02.378 fd0bbcfdd fio/nvme: use socket_id when allocating io buffers
00:01:02.378 8c20d24e0 spdk_nvme_perf: allocate buffers from socket_id reported by ctrlr
00:01:02.378 e9e51ebfe nvme/pcie: allocate cq from device-local numa node's memory
00:01:02.390 [Pipeline] }
00:01:02.407 [Pipeline] // stage
00:01:02.416 [Pipeline] stage
00:01:02.418 [Pipeline] { (Prepare)
00:01:02.438 [Pipeline] writeFile
00:01:02.456 [Pipeline] sh
00:01:02.735 + logger -p user.info -t JENKINS-CI
00:01:02.748 [Pipeline] sh
00:01:03.026 + logger -p user.info -t JENKINS-CI
00:01:03.038 [Pipeline] sh
00:01:03.317 + cat autorun-spdk.conf
00:01:03.317 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:03.317 SPDK_TEST_NVMF=1
00:01:03.317 SPDK_TEST_NVME_CLI=1
00:01:03.317 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:03.317 SPDK_TEST_NVMF_NICS=e810
00:01:03.317 SPDK_TEST_VFIOUSER=1
00:01:03.317 SPDK_RUN_UBSAN=1
00:01:03.317 NET_TYPE=phy
00:01:03.324 RUN_NIGHTLY=0
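autorun-spdk.conf is plain key=value shell, and everything that follows is driven by it: spdk/autorun.sh sources the file (the '++ SPDK_TEST_*' xtrace lines below show exactly that) and each flag gates one test suite. A hedged sketch of how such a file is consumed -- the echo is illustrative, not the harness's actual code:

  # autorun-spdk.conf is sourced as ordinary shell
  source ./autorun-spdk.conf
  if [[ $SPDK_TEST_NVMF -eq 1 && $SPDK_TEST_NVMF_TRANSPORT == tcp ]]; then
      # e810 selects the Intel ice driver, as the 'case $SPDK_TEST_NVMF_NICS' below shows
      echo "NVMe-oF/TCP tests enabled on $SPDK_TEST_NVMF_NICS NICs"
  fi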
00:01:03.329 [Pipeline] readFile
00:01:03.355 [Pipeline] withEnv
00:01:03.357 [Pipeline] {
00:01:03.369 [Pipeline] sh
00:01:03.649 + set -ex
00:01:03.649 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:03.649 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:03.649 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:03.649 ++ SPDK_TEST_NVMF=1
00:01:03.649 ++ SPDK_TEST_NVME_CLI=1
00:01:03.649 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:03.649 ++ SPDK_TEST_NVMF_NICS=e810
00:01:03.649 ++ SPDK_TEST_VFIOUSER=1
00:01:03.649 ++ SPDK_RUN_UBSAN=1
00:01:03.649 ++ NET_TYPE=phy
00:01:03.649 ++ RUN_NIGHTLY=0
00:01:03.649 + case $SPDK_TEST_NVMF_NICS in
00:01:03.649 + DRIVERS=ice
00:01:03.649 + [[ tcp == \r\d\m\a ]]
00:01:03.649 + [[ -n ice ]]
00:01:03.649 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:03.649 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:10.248 rmmod: ERROR: Module irdma is not currently loaded
00:01:10.248 rmmod: ERROR: Module i40iw is not currently loaded
00:01:10.248 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:10.248 + true
00:01:10.248 + for D in $DRIVERS
00:01:10.248 + sudo modprobe ice
00:01:10.248 + exit 0
00:01:10.259 [Pipeline] }
00:01:10.279 [Pipeline] // withEnv
00:01:10.283 [Pipeline] }
00:01:10.299 [Pipeline] // stage
00:01:10.309 [Pipeline] catchError
00:01:10.311 [Pipeline] {
00:01:10.325 [Pipeline] timeout
00:01:10.325 Timeout set to expire in 50 min
00:01:10.327 [Pipeline] {
00:01:10.340 [Pipeline] stage
00:01:10.342 [Pipeline] { (Tests)
00:01:10.354 [Pipeline] sh
00:01:10.634 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:10.634 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:10.634 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:10.634 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:10.634 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:10.634 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:10.634 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:10.634 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:10.634 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:10.634 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:10.634 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:10.634 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:10.634 + source /etc/os-release
00:01:10.634 ++ NAME='Fedora Linux'
00:01:10.634 ++ VERSION='38 (Cloud Edition)'
00:01:10.634 ++ ID=fedora
00:01:10.634 ++ VERSION_ID=38
00:01:10.634 ++ VERSION_CODENAME=
00:01:10.634 ++ PLATFORM_ID=platform:f38
00:01:10.634 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:10.634 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:10.634 ++ LOGO=fedora-logo-icon
00:01:10.634 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:10.634 ++ HOME_URL=https://fedoraproject.org/
00:01:10.634 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:10.634 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:10.634 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:10.634 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:10.634 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:10.635 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:10.635 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:10.635 ++ SUPPORT_END=2024-05-14
00:01:10.635 ++ VARIANT='Cloud Edition'
00:01:10.635 ++ VARIANT_ID=cloud
00:01:10.635 + uname -a
00:01:10.635 Linux spdk-wfp-06 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:10.635 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:13.172 Hugepages
00:01:13.172 node hugesize free / total
00:01:13.172 node0 1048576kB 0 / 0
00:01:13.172 node0 2048kB 0 / 0
00:01:13.172 node1 1048576kB 0 / 0
00:01:13.172 node1 2048kB 0 / 0
00:01:13.172
00:01:13.172 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:13.172 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:13.172 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:13.172 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:13.172 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:13.172 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:13.172 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:13.172 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:13.172 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:13.172 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:01:13.172 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:13.172 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:13.172 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:13.172 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:13.172 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:13.172 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:13.172 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:13.172 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
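The setup.sh status table above shows zero hugepages reserved on both NUMA nodes and every device still bound to its kernel driver (ioatdma, nvme). Before SPDK targets run, the tests reserve hugepages and rebind devices; a sketch of the usual invocation, where the HUGEMEM value is an example rather than what this job used:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sudo HUGEMEM=2048 scripts/setup.sh   # reserve 2048 MiB of hugepages and rebind devices
  sudo scripts/setup.sh status         # re-check the table printed above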
00:01:13.172 + rm -f /tmp/spdk-ld-path
00:01:13.172 + source autorun-spdk.conf
00:01:13.172 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:13.172 ++ SPDK_TEST_NVMF=1
00:01:13.172 ++ SPDK_TEST_NVME_CLI=1
00:01:13.172 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:13.172 ++ SPDK_TEST_NVMF_NICS=e810
00:01:13.172 ++ SPDK_TEST_VFIOUSER=1
00:01:13.172 ++ SPDK_RUN_UBSAN=1
00:01:13.172 ++ NET_TYPE=phy
00:01:13.172 ++ RUN_NIGHTLY=0
00:01:13.172 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:13.172 + [[ -n '' ]]
00:01:13.172 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:13.172 + for M in /var/spdk/build-*-manifest.txt
00:01:13.172 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:13.172 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:13.172 + for M in /var/spdk/build-*-manifest.txt
00:01:13.172 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:13.172 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:13.172 ++ uname
00:01:13.172 + [[ Linux == \L\i\n\u\x ]]
00:01:13.172 + sudo dmesg -T
00:01:13.172 + sudo dmesg --clear
00:01:13.172 + dmesg_pid=3096944
00:01:13.172 + [[ Fedora Linux == FreeBSD ]]
00:01:13.172 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:13.172 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:13.172 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:13.172 + [[ -x /usr/src/fio-static/fio ]]
00:01:13.172 + export FIO_BIN=/usr/src/fio-static/fio
00:01:13.172 + FIO_BIN=/usr/src/fio-static/fio
00:01:13.172 + sudo dmesg -Tw
00:01:13.172 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:13.172 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:13.172 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:13.172 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:13.172 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:13.172 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:13.172 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:13.172 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:13.172 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:13.172 Test configuration:
00:01:13.172 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:13.172 SPDK_TEST_NVMF=1
00:01:13.172 SPDK_TEST_NVME_CLI=1
00:01:13.172 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:13.172 SPDK_TEST_NVMF_NICS=e810
00:01:13.172 SPDK_TEST_VFIOUSER=1
00:01:13.172 SPDK_RUN_UBSAN=1
00:01:13.317 NET_TYPE=phy
00:01:13.432 RUN_NIGHTLY=0
01:06:39 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
01:06:39 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
01:06:39 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
01:06:39 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
01:06:39 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
01:06:39 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
01:06:39 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
01:06:39 -- paths/export.sh@5 -- $ export PATH
01:06:39 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
01:06:39 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
01:06:39 -- common/autobuild_common.sh@444 -- $ date +%s
01:06:39 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721084799.XXXXXX
01:06:39 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721084799.0jaOPr
01:06:39 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
01:06:39 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
01:06:39 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
01:06:39 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
01:06:39 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
01:06:39 -- common/autobuild_common.sh@460 -- $ get_config_params
01:06:39 -- common/autotest_common.sh@396 -- $ xtrace_disable
01:06:39 -- common/autotest_common.sh@10 -- $ set +x
01:06:39 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
01:06:39 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
01:06:39 -- pm/common@17 -- $ local monitor
01:06:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
01:06:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
01:06:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
01:06:39 -- pm/common@21 -- $ date +%s
01:06:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
01:06:39 -- pm/common@21 -- $ date +%s
01:06:39 -- pm/common@21 -- $ date +%s
01:06:39 -- pm/common@25 -- $ sleep 1
01:06:39 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721084799
01:06:39 -- pm/common@21 -- $ date +%s
01:06:39 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721084799
01:06:39 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721084799
01:06:39 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721084799
00:01:13.432 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721084799_collect-cpu-load.pm.log
00:01:13.432 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721084799_collect-vmstat.pm.log
00:01:13.432 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721084799_collect-cpu-temp.pm.log
00:01:13.432 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721084799_collect-bmc-pm.bmc.pm.log
00:01:14.370 01:06:40 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:01:14.371 01:06:40 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
01:06:40 -- spdk/autobuild.sh@12 -- $ umask 022
01:06:40 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
01:06:40 -- spdk/autobuild.sh@16 -- $ date -u
00:01:14.371 Mon Jul 15 11:06:40 PM UTC 2024
01:06:40 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:14.371 v24.09-pre-239-g315cf04b6
01:06:40 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
01:06:40 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
01:06:40 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
01:06:40 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
01:06:40 -- common/autotest_common.sh@1105 -- $ xtrace_disable
01:06:40 -- common/autotest_common.sh@10 -- $ set +x
00:01:14.371 ************************************
00:01:14.371 START TEST ubsan
00:01:14.371 ************************************
01:06:40 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan'
00:01:14.371 using ubsan
00:01:14.371
00:01:14.371 real 0m0.000s
00:01:14.371 user 0m0.000s
00:01:14.371 sys 0m0.000s
01:06:40 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable
01:06:40 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:14.371 ************************************
00:01:14.371 END TEST ubsan
00:01:14.371 ************************************
00:01:14.629 01:06:40 -- common/autotest_common.sh@1142 -- $ return 0
01:06:40 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
01:06:40 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
01:06:40 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
01:06:40 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
01:06:40 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
01:06:40 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
01:06:40 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
01:06:40 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
01:06:40 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:14.629 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:14.629 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:14.887 Using 'verbs' RDMA provider
00:01:28.022 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:37.989 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:38.576 Creating mk/config.mk...done.
00:01:38.576 Creating mk/cc.flags.mk...done.
00:01:38.576 Type 'make' to build.
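The configure flags recorded above map one-to-one onto autorun-spdk.conf (SPDK_RUN_UBSAN=1 becomes --enable-ubsan, SPDK_TEST_VFIOUSER=1 becomes --with-vfio-user, and so on). Reproducing the build by hand would look roughly like this; the -j96 used next simply matches this node's core count:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
  make -j96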
01:07:04 -- spdk/autobuild.sh@69 -- $ run_test make make -j96
01:07:04 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
01:07:04 -- common/autotest_common.sh@1105 -- $ xtrace_disable
01:07:04 -- common/autotest_common.sh@10 -- $ set +x
00:01:38.576 ************************************
00:01:38.576 START TEST make
00:01:38.576 ************************************
01:07:04 make -- common/autotest_common.sh@1123 -- $ make -j96
00:01:38.833 make[1]: Nothing to be done for 'all'.
00:01:40.211 The Meson build system
00:01:40.211 Version: 1.3.1
00:01:40.211 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:40.211 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:40.211 Build type: native build
00:01:40.211 Project name: libvfio-user
00:01:40.211 Project version: 0.0.1
00:01:40.211 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:40.211 C linker for the host machine: cc ld.bfd 2.39-16
00:01:40.211 Host machine cpu family: x86_64
00:01:40.211 Host machine cpu: x86_64
00:01:40.211 Run-time dependency threads found: YES
00:01:40.211 Library dl found: YES
00:01:40.211 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:40.211 Run-time dependency json-c found: YES 0.17
00:01:40.211 Run-time dependency cmocka found: YES 1.1.7
00:01:40.211 Program pytest-3 found: NO
00:01:40.211 Program flake8 found: NO
00:01:40.211 Program misspell-fixer found: NO
00:01:40.211 Program restructuredtext-lint found: NO
00:01:40.211 Program valgrind found: YES (/usr/bin/valgrind)
00:01:40.211 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:40.211 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:40.211 Compiler for C supports arguments -Wwrite-strings: YES
00:01:40.211 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:40.211 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:40.211 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:40.211 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:40.211 Build targets in project: 8
00:01:40.211 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:40.211 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:40.211
00:01:40.211 libvfio-user 0.0.1
00:01:40.211
00:01:40.211 User defined options
00:01:40.211 buildtype : debug
00:01:40.211 default_library: shared
00:01:40.211 libdir : /usr/local/lib
00:01:40.211
00:01:40.211 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:40.468 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:40.725 [1/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:40.725 [2/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:40.725 [3/37] Compiling C object samples/null.p/null.c.o
00:01:40.725 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:40.725 [5/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:40.725 [6/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:40.725 [7/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:40.725 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:40.725 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:40.725 [10/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:40.725 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:40.725 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:40.725 [13/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:40.725 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:40.725 [15/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:40.725 [16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:40.725 [17/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:40.725 [18/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:40.725 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:40.725 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:40.725 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:40.725 [22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:40.725 [23/37] Compiling C object samples/server.p/server.c.o
00:01:40.725 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:40.725 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:40.725 [26/37] Compiling C object samples/client.p/client.c.o
00:01:40.725 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:40.725 [28/37] Linking target samples/client
00:01:40.725 [29/37] Linking target lib/libvfio-user.so.0.0.1
00:01:40.725 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:40.725 [31/37] Linking target test/unit_tests
00:01:40.983 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:40.983 [33/37] Linking target samples/server
00:01:40.983 [34/37] Linking target samples/shadow_ioeventfd_server
00:01:40.983 [35/37] Linking target samples/gpio-pci-idio-16
00:01:40.983 [36/37] Linking target samples/null
00:01:40.983 [37/37] Linking target samples/lspci
00:01:40.983 INFO: autodetecting backend as ninja
00:01:40.983 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
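The libvfio-user build just logged is a stock Meson flow; a sketch of the equivalent manual commands, assembled from the build dir and options Meson reports above (a hedged reconstruction, not the harness's literal script):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  meson setup -Dbuildtype=debug -Ddefault_library=shared -Dlibdir=/usr/local/lib \
      build/libvfio-user/build-debug libvfio-user
  ninja -C build/libvfio-user/build-debug
  # the DESTDIR install on the next log line stages the result under spdk/build:
  DESTDIR=$PWD/build/libvfio-user meson install --quiet -C build/libvfio-user/build-debug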
00:01:40.983 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:41.548 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:41.548 ninja: no work to do.
00:01:46.816 The Meson build system
00:01:46.816 Version: 1.3.1
00:01:46.816 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:46.816 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:46.816 Build type: native build
00:01:46.816 Program cat found: YES (/usr/bin/cat)
00:01:46.816 Project name: DPDK
00:01:46.816 Project version: 24.03.0
00:01:46.816 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:46.816 C linker for the host machine: cc ld.bfd 2.39-16
00:01:46.816 Host machine cpu family: x86_64
00:01:46.816 Host machine cpu: x86_64
00:01:46.816 Message: ## Building in Developer Mode ##
00:01:46.816 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:46.816 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:46.816 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:46.816 Program python3 found: YES (/usr/bin/python3)
00:01:46.816 Program cat found: YES (/usr/bin/cat)
00:01:46.816 Compiler for C supports arguments -march=native: YES
00:01:46.816 Checking for size of "void *" : 8
00:01:46.816 Checking for size of "void *" : 8 (cached)
00:01:46.816 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:01:46.816 Library m found: YES
00:01:46.816 Library numa found: YES
00:01:46.816 Has header "numaif.h" : YES
00:01:46.816 Library fdt found: NO
00:01:46.816 Library execinfo found: NO
00:01:46.816 Has header "execinfo.h" : YES
00:01:46.816 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:46.816 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:46.816 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:46.816 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:46.816 Run-time dependency openssl found: YES 3.0.9
00:01:46.816 Run-time dependency libpcap found: YES 1.10.4
00:01:46.816 Has header "pcap.h" with dependency libpcap: YES
00:01:46.816 Compiler for C supports arguments -Wcast-qual: YES
00:01:46.816 Compiler for C supports arguments -Wdeprecated: YES
00:01:46.816 Compiler for C supports arguments -Wformat: YES
00:01:46.816 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:46.816 Compiler for C supports arguments -Wformat-security: NO
00:01:46.816 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:46.816 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:46.816 Compiler for C supports arguments -Wnested-externs: YES
00:01:46.816 Compiler for C supports arguments -Wold-style-definition: YES
00:01:46.816 Compiler for C supports arguments -Wpointer-arith: YES
00:01:46.816 Compiler for C supports arguments -Wsign-compare: YES
00:01:46.816 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:46.816 Compiler for C supports arguments -Wundef: YES
00:01:46.816 Compiler for C supports arguments -Wwrite-strings: YES
00:01:46.816 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:46.816 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:46.816 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:46.816 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:46.816 Program objdump found: YES (/usr/bin/objdump)
00:01:46.816 Compiler for C supports arguments -mavx512f: YES
00:01:46.816 Checking if "AVX512 checking" compiles: YES
00:01:46.816 Fetching value of define "__SSE4_2__" : 1
00:01:46.816 Fetching value of define "__AES__" : 1
00:01:46.816 Fetching value of define "__AVX__" : 1
00:01:46.816 Fetching value of define "__AVX2__" : 1
00:01:46.816 Fetching value of define "__AVX512BW__" : 1
00:01:46.816 Fetching value of define "__AVX512CD__" : 1
00:01:46.816 Fetching value of define "__AVX512DQ__" : 1
00:01:46.816 Fetching value of define "__AVX512F__" : 1
00:01:46.816 Fetching value of define "__AVX512VL__" : 1
00:01:46.816 Fetching value of define "__PCLMUL__" : 1
00:01:46.816 Fetching value of define "__RDRND__" : 1
00:01:46.816 Fetching value of define "__RDSEED__" : 1
00:01:46.816 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:46.816 Fetching value of define "__znver1__" : (undefined)
00:01:46.816 Fetching value of define "__znver2__" : (undefined)
00:01:46.816 Fetching value of define "__znver3__" : (undefined)
00:01:46.816 Fetching value of define "__znver4__" : (undefined)
00:01:46.816 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:46.816 Message: lib/log: Defining dependency "log"
00:01:46.816 Message: lib/kvargs: Defining dependency "kvargs"
00:01:46.816 Message: lib/telemetry: Defining dependency "telemetry"
00:01:46.816 Checking for function "getentropy" : NO
00:01:46.816 Message: lib/eal: Defining dependency "eal"
00:01:46.816 Message: lib/ring: Defining dependency "ring"
00:01:46.816 Message: lib/rcu: Defining dependency "rcu"
00:01:46.816 Message: lib/mempool: Defining dependency "mempool"
00:01:46.816 Message: lib/mbuf: Defining dependency "mbuf"
00:01:46.816 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:46.816 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:46.816 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:46.816 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:46.816 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:46.816 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:46.816 Compiler for C supports arguments -mpclmul: YES
00:01:46.816 Compiler for C supports arguments -maes: YES
00:01:46.816 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:46.816 Compiler for C supports arguments -mavx512bw: YES
00:01:46.816 Compiler for C supports arguments -mavx512dq: YES
00:01:46.816 Compiler for C supports arguments -mavx512vl: YES
00:01:46.816 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:46.816 Compiler for C supports arguments -mavx2: YES
00:01:46.816 Compiler for C supports arguments -mavx: YES
00:01:46.816 Message: lib/net: Defining dependency "net"
00:01:46.816 Message: lib/meter: Defining dependency "meter"
00:01:46.816 Message: lib/ethdev: Defining dependency "ethdev"
00:01:46.816 Message: lib/pci: Defining dependency "pci"
00:01:46.816 Message: lib/cmdline: Defining dependency "cmdline"
00:01:46.816 Message: lib/hash: Defining dependency "hash"
00:01:46.816 Message: lib/timer: Defining dependency "timer"
00:01:46.816 Message: lib/compressdev: Defining dependency "compressdev"
00:01:46.816 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:46.816 Message: lib/dmadev: Defining dependency "dmadev"
00:01:46.816 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:46.816 Message: lib/power: Defining dependency "power"
00:01:46.816 Message: lib/reorder: Defining dependency "reorder"
00:01:46.816 Message: lib/security: Defining dependency "security"
00:01:46.816 Has header "linux/userfaultfd.h" : YES
00:01:46.816 Has header "linux/vduse.h" : YES
00:01:46.816 Message: lib/vhost: Defining dependency "vhost"
00:01:46.816 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:46.816 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:46.816 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:46.816 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:46.816 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:46.816 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:46.816 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:46.816 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:46.816 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:46.816 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:46.816 Program doxygen found: YES (/usr/bin/doxygen)
00:01:46.816 Configuring doxy-api-html.conf using configuration
00:01:46.816 Configuring doxy-api-man.conf using configuration
00:01:46.816 Program mandb found: YES (/usr/bin/mandb)
00:01:46.816 Program sphinx-build found: NO
00:01:46.816 Configuring rte_build_config.h using configuration
00:01:46.816 Message:
00:01:46.816 =================
00:01:46.816 Applications Enabled
00:01:46.816 =================
00:01:46.816
00:01:46.816 apps:
00:01:46.816
00:01:46.816
00:01:46.816 Message:
00:01:46.816 =================
00:01:46.816 Libraries Enabled
00:01:46.816 =================
00:01:46.816
00:01:46.816 libs:
00:01:46.816 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:46.816 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:46.816 cryptodev, dmadev, power, reorder, security, vhost,
00:01:46.816
00:01:46.816 Message:
00:01:46.816 ===============
00:01:46.816 Drivers Enabled
00:01:46.816 ===============
00:01:46.816
00:01:46.816 common:
00:01:46.816
00:01:46.816 bus:
00:01:46.816 pci, vdev,
00:01:46.816 mempool:
00:01:46.816 ring,
00:01:46.816 dma:
00:01:46.816
00:01:46.816 net:
00:01:46.816
00:01:46.816 crypto:
00:01:46.816
00:01:46.816 compress:
00:01:46.816
00:01:46.816 vdpa:
00:01:46.816
00:01:46.816
00:01:46.816 Message:
00:01:46.816 =================
00:01:46.816 Content Skipped
00:01:46.816 =================
00:01:46.816
00:01:46.816 apps:
00:01:46.816 dumpcap: explicitly disabled via build config
00:01:46.816 graph: explicitly disabled via build config
00:01:46.816 pdump: explicitly disabled via build config
00:01:46.816 proc-info: explicitly disabled via build config
00:01:46.816 test-acl: explicitly disabled via build config
00:01:46.816 test-bbdev: explicitly disabled via build config
00:01:46.816 test-cmdline: explicitly disabled via build config
00:01:46.816 test-compress-perf: explicitly disabled via build config
00:01:46.817 test-crypto-perf: explicitly disabled via build config
00:01:46.817 test-dma-perf: explicitly disabled via build config
00:01:46.817 test-eventdev: explicitly disabled via build config
00:01:46.817 test-fib: explicitly disabled via build config
00:01:46.817 test-flow-perf: explicitly disabled via build config
00:01:46.817 test-gpudev: explicitly disabled via build config
00:01:46.817 test-mldev: explicitly disabled via build config
00:01:46.817 test-pipeline: explicitly disabled via build config
00:01:46.817 test-pmd: explicitly disabled via build config
00:01:46.817 test-regex: explicitly disabled via build config
00:01:46.817 test-sad: explicitly disabled via build config
00:01:46.817 test-security-perf: explicitly disabled via build config
00:01:46.817
00:01:46.817 libs:
00:01:46.817 argparse: explicitly disabled via build config
00:01:46.817 metrics: explicitly disabled via build config
00:01:46.817 acl: explicitly disabled via build config
00:01:46.817 bbdev: explicitly disabled via build config
00:01:46.817 bitratestats: explicitly disabled via build config
00:01:46.817 bpf: explicitly disabled via build config
00:01:46.817 cfgfile: explicitly disabled via build config
00:01:46.817 distributor: explicitly disabled via build config
00:01:46.817 efd: explicitly disabled via build config
00:01:46.817 eventdev: explicitly disabled via build config
00:01:46.817 dispatcher: explicitly disabled via build config
00:01:46.817 gpudev: explicitly disabled via build config
00:01:46.817 gro: explicitly disabled via build config
00:01:46.817 gso: explicitly disabled via build config
00:01:46.817 ip_frag: explicitly disabled via build config
00:01:46.817 jobstats: explicitly disabled via build config
00:01:46.817 latencystats: explicitly disabled via build config
00:01:46.817 lpm: explicitly disabled via build config
00:01:46.817 member: explicitly disabled via build config
00:01:46.817 pcapng: explicitly disabled via build config
00:01:46.817 rawdev: explicitly disabled via build config
00:01:46.817 regexdev: explicitly disabled via build config
00:01:46.817 mldev: explicitly disabled via build config
00:01:46.817 rib: explicitly disabled via build config
00:01:46.817 sched: explicitly disabled via build config
00:01:46.817 stack: explicitly disabled via build config
00:01:46.817 ipsec: explicitly disabled via build config
00:01:46.817 pdcp: explicitly disabled via build config
00:01:46.817 fib: explicitly disabled via build config
00:01:46.817 port: explicitly disabled via build config
00:01:46.817 pdump: explicitly disabled via build config
00:01:46.817 table: explicitly disabled via build config
00:01:46.817 pipeline: explicitly disabled via build config
00:01:46.817 graph: explicitly disabled via build config
00:01:46.817 node: explicitly disabled via build config
00:01:46.817
00:01:46.817 drivers:
00:01:46.817 common/cpt: not in enabled drivers build config
00:01:46.817 common/dpaax: not in enabled drivers build config
00:01:46.817 common/iavf: not in enabled drivers build config
00:01:46.817 common/idpf: not in enabled drivers build config
00:01:46.817 common/ionic: not in enabled drivers build config
00:01:46.817 common/mvep: not in enabled drivers build config
00:01:46.817 common/octeontx: not in enabled drivers build config
00:01:46.817 bus/auxiliary: not in enabled drivers build config
00:01:46.817 bus/cdx: not in enabled drivers build config
00:01:46.817 bus/dpaa: not in enabled drivers build config
00:01:46.817 bus/fslmc: not in enabled drivers build config
00:01:46.817 bus/ifpga: not in enabled drivers build config
00:01:46.817 bus/platform: not in enabled drivers build config
00:01:46.817 bus/uacce: not in enabled drivers build config
00:01:46.817 bus/vmbus: not in enabled drivers build config
00:01:46.817 common/cnxk: not in enabled drivers build config
00:01:46.817 common/mlx5: not in enabled drivers build config
00:01:46.817 common/nfp: not in enabled drivers build config
00:01:46.817 common/nitrox: not in enabled drivers build config
00:01:46.817 common/qat: not in enabled drivers build config
00:01:46.817 common/sfc_efx: not in enabled drivers build config
00:01:46.817 mempool/bucket: not in enabled drivers build config
00:01:46.817 mempool/cnxk: not in enabled drivers build config
00:01:46.817 mempool/dpaa: not in enabled drivers build config
00:01:46.817 mempool/dpaa2: not in enabled drivers build config
00:01:46.817 mempool/octeontx: not in enabled drivers build config
00:01:46.817 mempool/stack: not in enabled drivers build config
00:01:46.817 dma/cnxk: not in enabled drivers build config
00:01:46.817 dma/dpaa: not in enabled drivers build config
00:01:46.817 dma/dpaa2: not in enabled drivers build config
00:01:46.817 dma/hisilicon: not in enabled drivers build config
00:01:46.817 dma/idxd: not in enabled drivers build config
00:01:46.817 dma/ioat: not in enabled drivers build config
00:01:46.817 dma/skeleton: not in enabled drivers build config
00:01:46.817 net/af_packet: not in enabled drivers build config
00:01:46.817 net/af_xdp: not in enabled drivers build config
00:01:46.817 net/ark: not in enabled drivers build config
00:01:46.817 net/atlantic: not in enabled drivers build config
00:01:46.817 net/avp: not in enabled drivers build config
00:01:46.817 net/axgbe: not in enabled drivers build config
00:01:46.817 net/bnx2x: not in enabled drivers build config
00:01:46.817 net/bnxt: not in enabled drivers build config
00:01:46.817 net/bonding: not in enabled drivers build config
00:01:46.817 net/cnxk: not in enabled drivers build config
00:01:46.817 net/cpfl: not in enabled drivers build config
00:01:46.817 net/cxgbe: not in enabled drivers build config
00:01:46.817 net/dpaa: not in enabled drivers build config
00:01:46.817 net/dpaa2: not in enabled drivers build config
00:01:46.817 net/e1000: not in enabled drivers build config
00:01:46.817 net/ena: not in enabled drivers build config
00:01:46.817 net/enetc: not in enabled drivers build config
00:01:46.817 net/enetfec: not in enabled drivers build config
00:01:46.817 net/enic: not in enabled drivers build config
00:01:46.817 net/failsafe: not in enabled drivers build config
00:01:46.817 net/fm10k: not in enabled drivers build config
00:01:46.817 net/gve: not in enabled drivers build config
00:01:46.817 net/hinic: not in enabled drivers build config
00:01:46.817 net/hns3: not in enabled drivers build config
00:01:46.817 net/i40e: not in enabled drivers build config
00:01:46.817 net/iavf: not in enabled drivers build config
00:01:46.817 net/ice: not in enabled drivers build config
00:01:46.817 net/idpf: not in enabled drivers build config
00:01:46.817 net/igc: not in enabled drivers build config
00:01:46.817 net/ionic: not in enabled drivers build config
00:01:46.817 net/ipn3ke: not in enabled drivers build config
00:01:46.817 net/ixgbe: not in enabled drivers build config
00:01:46.817 net/mana: not in enabled drivers build config
00:01:46.817 net/memif: not in enabled drivers build config
00:01:46.817 net/mlx4: not in enabled drivers build config
00:01:46.817 net/mlx5: not in enabled drivers build config
00:01:46.817 net/mvneta: not in enabled drivers build config
00:01:46.817 net/mvpp2: not in enabled drivers build config
00:01:46.817 net/netvsc: not in enabled drivers build config
00:01:46.817 net/nfb: not in enabled drivers build config
00:01:46.817 net/nfp: not in enabled drivers build config
00:01:46.817 net/ngbe: not in enabled drivers build config
00:01:46.817 net/null: not in enabled drivers build config
00:01:46.817 net/octeontx: not in enabled drivers build config
00:01:46.817 net/octeon_ep: not in enabled drivers build config
00:01:46.817 net/pcap: not in enabled drivers build config
00:01:46.817 net/pfe: not in enabled drivers build config
00:01:46.817 net/qede: not in enabled drivers build config
00:01:46.817 net/ring: not in enabled drivers build config
00:01:46.817 net/sfc: not in enabled drivers build config
00:01:46.817 net/softnic: not in enabled drivers build config
00:01:46.817 net/tap: not in enabled drivers build config
00:01:46.817 net/thunderx: not in enabled drivers build config
00:01:46.817 net/txgbe: not in enabled drivers build config
00:01:46.817 net/vdev_netvsc: not in enabled drivers build config
00:01:46.817 net/vhost: not in enabled drivers build config
00:01:46.817 net/virtio: not in enabled drivers build config
00:01:46.817 net/vmxnet3: not in enabled drivers build config
00:01:46.817 raw/*: missing internal dependency, "rawdev"
00:01:46.817 crypto/armv8: not in enabled drivers build config
00:01:46.817 crypto/bcmfs: not in enabled drivers build config
00:01:46.817 crypto/caam_jr: not in enabled drivers build config
00:01:46.817 crypto/ccp: not in enabled drivers build config
00:01:46.817 crypto/cnxk: not in enabled drivers build config
00:01:46.817 crypto/dpaa_sec: not in enabled drivers build config
00:01:46.817 crypto/dpaa2_sec: not in enabled drivers build config
00:01:46.817 crypto/ipsec_mb: not in enabled drivers build config
00:01:46.817 crypto/mlx5: not in enabled drivers build config
00:01:46.817 crypto/mvsam: not in enabled drivers build config
00:01:46.817 crypto/nitrox: not in enabled drivers build config
00:01:46.817 crypto/null: not in enabled drivers build config
00:01:46.817 crypto/octeontx: not in enabled drivers build config
00:01:46.817 crypto/openssl: not in enabled drivers build config
00:01:46.817 crypto/scheduler: not in enabled drivers build config
00:01:46.817 crypto/uadk: not in enabled drivers build config
00:01:46.817 crypto/virtio: not in enabled drivers build config
00:01:46.817 compress/isal: not in enabled drivers build config
00:01:46.817 compress/mlx5: not in enabled drivers build config
00:01:46.817 compress/nitrox: not in enabled drivers build config
00:01:46.817 compress/octeontx: not in enabled drivers build config
00:01:46.817 compress/zlib: not in enabled drivers build config
00:01:46.817 regex/*: missing internal dependency, "regexdev"
00:01:46.817 ml/*: missing internal dependency, "mldev"
00:01:46.817 vdpa/ifc: not in enabled drivers build config
00:01:46.817 vdpa/mlx5: not in enabled drivers build config
00:01:46.817 vdpa/nfp: not in enabled drivers build config
00:01:46.817 vdpa/sfc: not in enabled drivers build config
00:01:46.817 event/*: missing internal dependency, "eventdev"
00:01:46.817 baseband/*: missing internal dependency, "bbdev"
00:01:46.817 gpu/*: missing internal dependency, "gpudev"
00:01:46.817
00:01:46.817
00:01:46.817 Build targets in project: 85
00:01:46.817
00:01:46.817 DPDK 24.03.0
00:01:46.817
00:01:46.817 User defined options
00:01:46.817 buildtype : debug
00:01:46.817 default_library : shared
00:01:46.817 libdir : lib
00:01:46.817 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:46.817 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:46.817 c_link_args :
00:01:46.817 cpu_instruction_set: native
00:01:46.817 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump
00:01:46.817 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump
00:01:46.817 enable_docs : false
00:01:46.817 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:46.817 enable_kmods : false
00:01:46.817 max_lcores : 128
00:01:46.817 tests : false
00:01:46.818
00:01:46.818 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:46.818 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:47.087 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:47.087 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:47.087 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:47.087 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:47.087 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:47.087 [6/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:47.087 [7/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:47.087 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:47.087 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:47.087 [10/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:47.087 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:47.087 [12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:47.087 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:47.087 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:47.087 [15/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:47.087 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:47.087 [17/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:47.087 [18/268] Linking static target lib/librte_log.a
00:01:47.087 [19/268] Linking static target lib/librte_kvargs.a
00:01:47.345 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:47.345 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:47.345 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:47.345 [23/268] Linking static target lib/librte_pci.a
00:01:47.345 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:47.345 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:47.345 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:47.345 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:47.345 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:47.345 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:47.345 [30/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:47.345 [31/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:47.603 [32/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:47.603 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:47.603 [34/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:47.603 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:47.603 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:47.603 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:47.603 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:47.603 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:47.603 [40/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:47.603 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:47.603 [42/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:47.603 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:47.603 [44/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:47.603 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:47.603 [46/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:47.603 [47/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:47.603 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:47.603 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:47.603 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:47.603 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:47.603 [52/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:47.603 [53/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:47.603 [54/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:47.603 [55/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:47.603 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:47.603 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:47.603 [58/268] Linking static target lib/librte_ring.a
00:01:47.603 [59/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:01:47.603 [60/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:47.603 [61/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:47.603 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:47.603 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:47.603 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:47.603 [65/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:47.603 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:47.603 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:47.603 [68/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:47.603 [69/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:47.603 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:47.603 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:47.603 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:47.603 [74/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:47.603 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:47.603 [76/268] Linking static target lib/librte_telemetry.a 00:01:47.603 [77/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:47.603 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:47.603 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:47.603 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:47.603 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:47.603 [82/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:47.603 [83/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:47.603 [84/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:47.603 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:47.603 [86/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:47.603 [87/268] Linking static target lib/librte_meter.a 00:01:47.603 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:47.603 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:47.603 [90/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:47.603 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:47.603 [92/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:47.603 [93/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:47.863 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:47.863 [95/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:47.863 [96/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:47.863 [97/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:47.863 [98/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:47.863 [99/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:47.863 [100/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:47.863 [101/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:47.863 [102/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:47.863 [103/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:47.863 [104/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:47.863 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:47.863 [106/268] Linking static target lib/librte_mempool.a 00:01:47.863 [107/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:47.863 [108/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:47.863 [109/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:47.863 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:47.863 [111/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:47.863 
[112/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:47.863 [113/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:47.863 [114/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:47.863 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:47.863 [116/268] Linking static target lib/librte_net.a 00:01:47.863 [117/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:47.863 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:47.863 [119/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:47.863 [120/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:47.863 [121/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:47.863 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:47.863 [123/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:47.863 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:47.863 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:47.863 [126/268] Linking static target lib/librte_cmdline.a 00:01:47.863 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:47.863 [128/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:47.863 [129/268] Linking static target lib/librte_rcu.a 00:01:47.863 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:47.863 [131/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:47.863 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:47.863 [133/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.863 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:47.863 [135/268] Linking static target lib/librte_eal.a 00:01:47.863 [136/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.863 [137/268] Linking target lib/librte_log.so.24.1 00:01:47.863 [138/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.121 [139/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:48.121 [140/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:48.121 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:48.121 [142/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:48.121 [143/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.121 [144/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:48.121 [145/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:48.121 [146/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:48.121 [147/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:48.121 [148/268] Linking static target lib/librte_mbuf.a 00:01:48.121 [149/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:48.121 [150/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:48.121 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:48.121 [152/268] Compiling C object 
lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:48.121 [153/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:48.121 [154/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.121 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:48.121 [156/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:48.121 [157/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:48.121 [158/268] Linking static target lib/librte_timer.a 00:01:48.121 [159/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:48.121 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:48.121 [161/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:48.121 [162/268] Linking target lib/librte_kvargs.so.24.1 00:01:48.121 [163/268] Linking target lib/librte_telemetry.so.24.1 00:01:48.121 [164/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.121 [165/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:48.121 [166/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:48.121 [167/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:48.121 [168/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:48.121 [169/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:48.121 [170/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:48.121 [171/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:48.121 [172/268] Linking static target lib/librte_compressdev.a 00:01:48.121 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:48.379 [174/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:48.379 [175/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:48.379 [176/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:48.379 [177/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:48.379 [178/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:48.379 [179/268] Linking static target lib/librte_dmadev.a 00:01:48.379 [180/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:48.379 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:48.379 [182/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:48.379 [183/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:48.379 [184/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:48.379 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:48.379 [186/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:48.379 [187/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:48.379 [188/268] Linking static target drivers/librte_bus_vdev.a 00:01:48.379 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:48.379 [190/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:48.379 [191/268] Compiling C object 
lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:48.379 [192/268] Linking static target lib/librte_power.a 00:01:48.379 [193/268] Linking static target lib/librte_security.a 00:01:48.379 [194/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:48.379 [195/268] Linking static target lib/librte_reorder.a 00:01:48.379 [196/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:48.379 [197/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:48.379 [198/268] Linking static target lib/librte_hash.a 00:01:48.379 [199/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:48.379 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:48.379 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:48.379 [202/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.638 [203/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:48.638 [204/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:48.638 [205/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:48.638 [206/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:48.638 [207/268] Linking static target lib/librte_cryptodev.a 00:01:48.638 [208/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:48.638 [209/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:48.638 [210/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.638 [211/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:48.638 [212/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:48.638 [213/268] Linking static target drivers/librte_bus_pci.a 00:01:48.638 [214/268] Linking static target drivers/librte_mempool_ring.a 00:01:48.638 [215/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.638 [216/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.896 [217/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:48.896 [218/268] Linking static target lib/librte_ethdev.a 00:01:48.896 [219/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.896 [220/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.896 [221/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.896 [222/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.896 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.896 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:49.154 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.154 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.154 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.088 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:50.088 
[229/268] Linking static target lib/librte_vhost.a 00:01:50.347 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.724 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.989 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.554 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.554 [234/268] Linking target lib/librte_eal.so.24.1 00:01:57.554 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:57.554 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:57.554 [237/268] Linking target lib/librte_ring.so.24.1 00:01:57.554 [238/268] Linking target lib/librte_meter.so.24.1 00:01:57.554 [239/268] Linking target lib/librte_pci.so.24.1 00:01:57.554 [240/268] Linking target lib/librte_timer.so.24.1 00:01:57.811 [241/268] Linking target lib/librte_dmadev.so.24.1 00:01:57.811 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:57.811 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:57.811 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:57.811 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:57.811 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:57.811 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:57.811 [248/268] Linking target lib/librte_rcu.so.24.1 00:01:57.811 [249/268] Linking target lib/librte_mempool.so.24.1 00:01:58.068 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:58.068 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:58.068 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:58.068 [253/268] Linking target lib/librte_mbuf.so.24.1 00:01:58.068 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:58.325 [255/268] Linking target lib/librte_net.so.24.1 00:01:58.325 [256/268] Linking target lib/librte_reorder.so.24.1 00:01:58.325 [257/268] Linking target lib/librte_compressdev.so.24.1 00:01:58.325 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:01:58.325 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:58.325 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:58.325 [261/268] Linking target lib/librte_hash.so.24.1 00:01:58.325 [262/268] Linking target lib/librte_cmdline.so.24.1 00:01:58.325 [263/268] Linking target lib/librte_security.so.24.1 00:01:58.325 [264/268] Linking target lib/librte_ethdev.so.24.1 00:01:58.580 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:58.580 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:58.580 [267/268] Linking target lib/librte_power.so.24.1 00:01:58.580 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:58.580 INFO: autodetecting backend as ninja 00:01:58.580 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:01:59.582 CC lib/ut_mock/mock.o 00:01:59.582 CC lib/ut/ut.o 00:01:59.582 CC lib/log/log.o 00:01:59.582 CC 
lib/log/log_deprecated.o 00:01:59.582 CC lib/log/log_flags.o 00:01:59.582 LIB libspdk_ut_mock.a 00:01:59.840 LIB libspdk_ut.a 00:01:59.840 LIB libspdk_log.a 00:01:59.840 SO libspdk_ut_mock.so.6.0 00:01:59.840 SO libspdk_ut.so.2.0 00:01:59.840 SO libspdk_log.so.7.0 00:01:59.840 SYMLINK libspdk_ut_mock.so 00:01:59.840 SYMLINK libspdk_ut.so 00:01:59.840 SYMLINK libspdk_log.so 00:02:00.097 CC lib/dma/dma.o 00:02:00.097 CC lib/ioat/ioat.o 00:02:00.097 CC lib/util/base64.o 00:02:00.097 CC lib/util/bit_array.o 00:02:00.097 CC lib/util/cpuset.o 00:02:00.097 CC lib/util/crc16.o 00:02:00.097 CC lib/util/crc32c.o 00:02:00.097 CC lib/util/crc32.o 00:02:00.097 CC lib/util/crc32_ieee.o 00:02:00.097 CC lib/util/crc64.o 00:02:00.097 CC lib/util/dif.o 00:02:00.097 CC lib/util/fd.o 00:02:00.097 CXX lib/trace_parser/trace.o 00:02:00.097 CC lib/util/fd_group.o 00:02:00.097 CC lib/util/file.o 00:02:00.097 CC lib/util/hexlify.o 00:02:00.097 CC lib/util/iov.o 00:02:00.097 CC lib/util/net.o 00:02:00.097 CC lib/util/math.o 00:02:00.097 CC lib/util/pipe.o 00:02:00.097 CC lib/util/uuid.o 00:02:00.097 CC lib/util/strerror_tls.o 00:02:00.097 CC lib/util/string.o 00:02:00.097 CC lib/util/xor.o 00:02:00.097 CC lib/util/zipf.o 00:02:00.354 CC lib/vfio_user/host/vfio_user.o 00:02:00.354 CC lib/vfio_user/host/vfio_user_pci.o 00:02:00.354 LIB libspdk_dma.a 00:02:00.354 SO libspdk_dma.so.4.0 00:02:00.354 SYMLINK libspdk_dma.so 00:02:00.354 LIB libspdk_ioat.a 00:02:00.354 SO libspdk_ioat.so.7.0 00:02:00.354 SYMLINK libspdk_ioat.so 00:02:00.611 LIB libspdk_vfio_user.a 00:02:00.611 SO libspdk_vfio_user.so.5.0 00:02:00.611 LIB libspdk_util.a 00:02:00.611 SYMLINK libspdk_vfio_user.so 00:02:00.611 SO libspdk_util.so.9.1 00:02:00.611 SYMLINK libspdk_util.so 00:02:00.869 LIB libspdk_trace_parser.a 00:02:00.869 SO libspdk_trace_parser.so.5.0 00:02:00.869 SYMLINK libspdk_trace_parser.so 00:02:01.125 CC lib/json/json_util.o 00:02:01.125 CC lib/json/json_parse.o 00:02:01.125 CC lib/json/json_write.o 00:02:01.125 CC lib/env_dpdk/env.o 00:02:01.125 CC lib/conf/conf.o 00:02:01.125 CC lib/env_dpdk/memory.o 00:02:01.125 CC lib/env_dpdk/pci.o 00:02:01.125 CC lib/env_dpdk/init.o 00:02:01.125 CC lib/env_dpdk/pci_ioat.o 00:02:01.125 CC lib/env_dpdk/threads.o 00:02:01.125 CC lib/env_dpdk/pci_virtio.o 00:02:01.125 CC lib/env_dpdk/pci_vmd.o 00:02:01.125 CC lib/env_dpdk/pci_idxd.o 00:02:01.125 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:01.125 CC lib/env_dpdk/pci_event.o 00:02:01.125 CC lib/rdma_provider/common.o 00:02:01.125 CC lib/env_dpdk/sigbus_handler.o 00:02:01.125 CC lib/env_dpdk/pci_dpdk.o 00:02:01.125 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:01.125 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:01.125 CC lib/rdma_utils/rdma_utils.o 00:02:01.125 CC lib/vmd/vmd.o 00:02:01.125 CC lib/vmd/led.o 00:02:01.125 CC lib/idxd/idxd.o 00:02:01.125 CC lib/idxd/idxd_user.o 00:02:01.125 CC lib/idxd/idxd_kernel.o 00:02:01.125 LIB libspdk_conf.a 00:02:01.125 LIB libspdk_rdma_provider.a 00:02:01.382 SO libspdk_rdma_provider.so.6.0 00:02:01.382 SO libspdk_conf.so.6.0 00:02:01.382 LIB libspdk_rdma_utils.a 00:02:01.382 LIB libspdk_json.a 00:02:01.382 SO libspdk_rdma_utils.so.1.0 00:02:01.382 SYMLINK libspdk_conf.so 00:02:01.382 SYMLINK libspdk_rdma_provider.so 00:02:01.382 SO libspdk_json.so.6.0 00:02:01.382 SYMLINK libspdk_rdma_utils.so 00:02:01.382 SYMLINK libspdk_json.so 00:02:01.382 LIB libspdk_idxd.a 00:02:01.639 SO libspdk_idxd.so.12.0 00:02:01.639 LIB libspdk_vmd.a 00:02:01.639 SO libspdk_vmd.so.6.0 00:02:01.639 SYMLINK libspdk_idxd.so 00:02:01.639 SYMLINK 
libspdk_vmd.so 00:02:01.639 CC lib/jsonrpc/jsonrpc_server.o 00:02:01.639 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:01.639 CC lib/jsonrpc/jsonrpc_client.o 00:02:01.639 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:01.897 LIB libspdk_jsonrpc.a 00:02:01.897 SO libspdk_jsonrpc.so.6.0 00:02:01.897 SYMLINK libspdk_jsonrpc.so 00:02:02.154 LIB libspdk_env_dpdk.a 00:02:02.154 SO libspdk_env_dpdk.so.15.0 00:02:02.154 SYMLINK libspdk_env_dpdk.so 00:02:02.154 CC lib/rpc/rpc.o 00:02:02.412 LIB libspdk_rpc.a 00:02:02.412 SO libspdk_rpc.so.6.0 00:02:02.412 SYMLINK libspdk_rpc.so 00:02:02.669 CC lib/trace/trace.o 00:02:02.669 CC lib/trace/trace_flags.o 00:02:02.669 CC lib/trace/trace_rpc.o 00:02:02.926 CC lib/notify/notify.o 00:02:02.926 CC lib/notify/notify_rpc.o 00:02:02.926 CC lib/keyring/keyring.o 00:02:02.926 CC lib/keyring/keyring_rpc.o 00:02:02.926 LIB libspdk_notify.a 00:02:02.926 SO libspdk_notify.so.6.0 00:02:02.926 LIB libspdk_trace.a 00:02:02.926 LIB libspdk_keyring.a 00:02:02.926 SO libspdk_trace.so.10.0 00:02:02.926 SYMLINK libspdk_notify.so 00:02:03.194 SO libspdk_keyring.so.1.0 00:02:03.194 SYMLINK libspdk_trace.so 00:02:03.194 SYMLINK libspdk_keyring.so 00:02:03.451 CC lib/thread/thread.o 00:02:03.451 CC lib/thread/iobuf.o 00:02:03.451 CC lib/sock/sock.o 00:02:03.451 CC lib/sock/sock_rpc.o 00:02:03.709 LIB libspdk_sock.a 00:02:03.709 SO libspdk_sock.so.10.0 00:02:03.709 SYMLINK libspdk_sock.so 00:02:03.967 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:03.967 CC lib/nvme/nvme_ctrlr.o 00:02:03.967 CC lib/nvme/nvme_ns_cmd.o 00:02:03.967 CC lib/nvme/nvme_fabric.o 00:02:03.967 CC lib/nvme/nvme_ns.o 00:02:03.967 CC lib/nvme/nvme_pcie_common.o 00:02:03.967 CC lib/nvme/nvme_qpair.o 00:02:03.967 CC lib/nvme/nvme_pcie.o 00:02:03.967 CC lib/nvme/nvme.o 00:02:03.967 CC lib/nvme/nvme_quirks.o 00:02:03.967 CC lib/nvme/nvme_transport.o 00:02:03.967 CC lib/nvme/nvme_discovery.o 00:02:03.967 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:03.967 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:03.967 CC lib/nvme/nvme_tcp.o 00:02:03.967 CC lib/nvme/nvme_opal.o 00:02:03.967 CC lib/nvme/nvme_io_msg.o 00:02:03.967 CC lib/nvme/nvme_poll_group.o 00:02:03.967 CC lib/nvme/nvme_zns.o 00:02:03.967 CC lib/nvme/nvme_stubs.o 00:02:03.967 CC lib/nvme/nvme_auth.o 00:02:03.967 CC lib/nvme/nvme_cuse.o 00:02:03.967 CC lib/nvme/nvme_vfio_user.o 00:02:03.967 CC lib/nvme/nvme_rdma.o 00:02:04.532 LIB libspdk_thread.a 00:02:04.532 SO libspdk_thread.so.10.1 00:02:04.532 SYMLINK libspdk_thread.so 00:02:04.790 CC lib/accel/accel.o 00:02:04.790 CC lib/accel/accel_rpc.o 00:02:04.790 CC lib/accel/accel_sw.o 00:02:04.790 CC lib/blob/blobstore.o 00:02:04.790 CC lib/blob/request.o 00:02:04.790 CC lib/blob/zeroes.o 00:02:04.790 CC lib/blob/blob_bs_dev.o 00:02:04.790 CC lib/virtio/virtio.o 00:02:04.790 CC lib/virtio/virtio_vhost_user.o 00:02:04.790 CC lib/virtio/virtio_vfio_user.o 00:02:04.790 CC lib/virtio/virtio_pci.o 00:02:04.790 CC lib/vfu_tgt/tgt_endpoint.o 00:02:04.790 CC lib/vfu_tgt/tgt_rpc.o 00:02:04.790 CC lib/init/json_config.o 00:02:04.790 CC lib/init/subsystem.o 00:02:04.790 CC lib/init/rpc.o 00:02:04.790 CC lib/init/subsystem_rpc.o 00:02:05.048 LIB libspdk_init.a 00:02:05.048 SO libspdk_init.so.5.0 00:02:05.048 LIB libspdk_virtio.a 00:02:05.048 LIB libspdk_vfu_tgt.a 00:02:05.048 SO libspdk_virtio.so.7.0 00:02:05.048 SYMLINK libspdk_init.so 00:02:05.048 SO libspdk_vfu_tgt.so.3.0 00:02:05.048 SYMLINK libspdk_virtio.so 00:02:05.048 SYMLINK libspdk_vfu_tgt.so 00:02:05.306 CC lib/event/app.o 00:02:05.306 CC lib/event/reactor.o 00:02:05.306 CC lib/event/log_rpc.o 
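The repeating LIB / SO / SYMLINK triplets above are SPDK's library link step: each component is archived as a static library, linked as a versioned shared object, and given an unversioned symlink for the linker to resolve. A generic sketch of that pattern for the log library (illustrative only; these are not SPDK's actual make rules):

  # archive the objects, produce a versioned .so, then a plain symlink
  ar rcs libspdk_log.a log.o log_flags.o log_deprecated.o
  cc -shared -Wl,-soname,libspdk_log.so.7 \
      -o libspdk_log.so.7.0 log.o log_flags.o log_deprecated.o
  ln -sf libspdk_log.so.7.0 libspdk_log.so

The SO lines (e.g. SO libspdk_log.so.7.0) report the versioned object, and each SYMLINK line records the matching unversioned link.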
00:02:05.306 CC lib/event/app_rpc.o 00:02:05.306 CC lib/event/scheduler_static.o 00:02:05.564 LIB libspdk_accel.a 00:02:05.564 SO libspdk_accel.so.15.1 00:02:05.564 SYMLINK libspdk_accel.so 00:02:05.564 LIB libspdk_event.a 00:02:05.822 LIB libspdk_nvme.a 00:02:05.822 SO libspdk_event.so.14.0 00:02:05.822 SO libspdk_nvme.so.13.1 00:02:05.822 SYMLINK libspdk_event.so 00:02:05.822 CC lib/bdev/bdev.o 00:02:05.822 CC lib/bdev/bdev_rpc.o 00:02:05.822 CC lib/bdev/bdev_zone.o 00:02:05.822 CC lib/bdev/part.o 00:02:05.822 CC lib/bdev/scsi_nvme.o 00:02:06.079 SYMLINK libspdk_nvme.so 00:02:07.011 LIB libspdk_blob.a 00:02:07.011 SO libspdk_blob.so.11.0 00:02:07.011 SYMLINK libspdk_blob.so 00:02:07.267 CC lib/blobfs/blobfs.o 00:02:07.267 CC lib/blobfs/tree.o 00:02:07.267 CC lib/lvol/lvol.o 00:02:07.524 LIB libspdk_bdev.a 00:02:07.524 SO libspdk_bdev.so.16.0 00:02:07.782 SYMLINK libspdk_bdev.so 00:02:07.782 LIB libspdk_blobfs.a 00:02:07.782 SO libspdk_blobfs.so.10.0 00:02:07.782 LIB libspdk_lvol.a 00:02:07.782 SYMLINK libspdk_blobfs.so 00:02:07.782 SO libspdk_lvol.so.10.0 00:02:08.040 SYMLINK libspdk_lvol.so 00:02:08.040 CC lib/nbd/nbd.o 00:02:08.040 CC lib/nbd/nbd_rpc.o 00:02:08.040 CC lib/nvmf/ctrlr.o 00:02:08.040 CC lib/nvmf/ctrlr_bdev.o 00:02:08.040 CC lib/nvmf/ctrlr_discovery.o 00:02:08.040 CC lib/nvmf/subsystem.o 00:02:08.040 CC lib/nvmf/nvmf.o 00:02:08.040 CC lib/nvmf/nvmf_rpc.o 00:02:08.040 CC lib/nvmf/tcp.o 00:02:08.040 CC lib/nvmf/transport.o 00:02:08.040 CC lib/ftl/ftl_core.o 00:02:08.040 CC lib/nvmf/stubs.o 00:02:08.040 CC lib/ftl/ftl_init.o 00:02:08.040 CC lib/nvmf/mdns_server.o 00:02:08.040 CC lib/ftl/ftl_layout.o 00:02:08.040 CC lib/nvmf/vfio_user.o 00:02:08.040 CC lib/ftl/ftl_debug.o 00:02:08.040 CC lib/nvmf/rdma.o 00:02:08.040 CC lib/ftl/ftl_io.o 00:02:08.040 CC lib/nvmf/auth.o 00:02:08.040 CC lib/ftl/ftl_sb.o 00:02:08.040 CC lib/ftl/ftl_l2p.o 00:02:08.040 CC lib/ftl/ftl_l2p_flat.o 00:02:08.040 CC lib/ftl/ftl_band.o 00:02:08.040 CC lib/ftl/ftl_nv_cache.o 00:02:08.040 CC lib/ftl/ftl_band_ops.o 00:02:08.040 CC lib/ftl/ftl_rq.o 00:02:08.040 CC lib/ftl/ftl_writer.o 00:02:08.040 CC lib/ftl/ftl_reloc.o 00:02:08.040 CC lib/ftl/ftl_l2p_cache.o 00:02:08.040 CC lib/scsi/dev.o 00:02:08.040 CC lib/ftl/ftl_p2l.o 00:02:08.040 CC lib/scsi/lun.o 00:02:08.040 CC lib/ftl/mngt/ftl_mngt.o 00:02:08.040 CC lib/scsi/port.o 00:02:08.040 CC lib/scsi/scsi.o 00:02:08.040 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:08.040 CC lib/scsi/scsi_bdev.o 00:02:08.040 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:08.040 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:08.040 CC lib/scsi/scsi_pr.o 00:02:08.040 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:08.040 CC lib/ublk/ublk.o 00:02:08.040 CC lib/scsi/scsi_rpc.o 00:02:08.040 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:08.040 CC lib/ublk/ublk_rpc.o 00:02:08.040 CC lib/scsi/task.o 00:02:08.040 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:08.040 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:08.040 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:08.040 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:08.040 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:08.040 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:08.040 CC lib/ftl/utils/ftl_conf.o 00:02:08.040 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:08.040 CC lib/ftl/utils/ftl_mempool.o 00:02:08.040 CC lib/ftl/utils/ftl_md.o 00:02:08.040 CC lib/ftl/utils/ftl_bitmap.o 00:02:08.040 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:08.040 CC lib/ftl/utils/ftl_property.o 00:02:08.040 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:08.040 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:08.040 CC 
lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:08.040 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:08.040 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:08.040 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:08.040 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:08.040 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:08.040 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:08.040 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:08.040 CC lib/ftl/base/ftl_base_dev.o 00:02:08.040 CC lib/ftl/base/ftl_base_bdev.o 00:02:08.040 CC lib/ftl/ftl_trace.o 00:02:08.605 LIB libspdk_nbd.a 00:02:08.605 SO libspdk_nbd.so.7.0 00:02:08.605 SYMLINK libspdk_nbd.so 00:02:08.605 LIB libspdk_scsi.a 00:02:08.605 LIB libspdk_ublk.a 00:02:08.605 SO libspdk_scsi.so.9.0 00:02:08.863 SO libspdk_ublk.so.3.0 00:02:08.863 SYMLINK libspdk_ublk.so 00:02:08.863 SYMLINK libspdk_scsi.so 00:02:08.863 LIB libspdk_ftl.a 00:02:09.121 SO libspdk_ftl.so.9.0 00:02:09.121 CC lib/vhost/vhost.o 00:02:09.121 CC lib/vhost/vhost_scsi.o 00:02:09.121 CC lib/vhost/vhost_rpc.o 00:02:09.121 CC lib/vhost/vhost_blk.o 00:02:09.121 CC lib/vhost/rte_vhost_user.o 00:02:09.121 CC lib/iscsi/conn.o 00:02:09.121 CC lib/iscsi/init_grp.o 00:02:09.121 CC lib/iscsi/iscsi.o 00:02:09.121 CC lib/iscsi/md5.o 00:02:09.121 CC lib/iscsi/param.o 00:02:09.121 CC lib/iscsi/tgt_node.o 00:02:09.121 CC lib/iscsi/portal_grp.o 00:02:09.121 CC lib/iscsi/iscsi_subsystem.o 00:02:09.121 CC lib/iscsi/iscsi_rpc.o 00:02:09.121 CC lib/iscsi/task.o 00:02:09.379 SYMLINK libspdk_ftl.so 00:02:09.638 LIB libspdk_nvmf.a 00:02:09.896 SO libspdk_nvmf.so.19.0 00:02:09.896 LIB libspdk_vhost.a 00:02:09.896 SO libspdk_vhost.so.8.0 00:02:09.896 SYMLINK libspdk_nvmf.so 00:02:09.896 SYMLINK libspdk_vhost.so 00:02:10.154 LIB libspdk_iscsi.a 00:02:10.154 SO libspdk_iscsi.so.8.0 00:02:10.154 SYMLINK libspdk_iscsi.so 00:02:10.717 CC module/vfu_device/vfu_virtio.o 00:02:10.717 CC module/vfu_device/vfu_virtio_blk.o 00:02:10.717 CC module/vfu_device/vfu_virtio_scsi.o 00:02:10.717 CC module/vfu_device/vfu_virtio_rpc.o 00:02:10.717 CC module/env_dpdk/env_dpdk_rpc.o 00:02:10.975 CC module/accel/ioat/accel_ioat.o 00:02:10.975 CC module/accel/ioat/accel_ioat_rpc.o 00:02:10.975 LIB libspdk_env_dpdk_rpc.a 00:02:10.975 CC module/accel/dsa/accel_dsa.o 00:02:10.975 CC module/accel/dsa/accel_dsa_rpc.o 00:02:10.975 CC module/sock/posix/posix.o 00:02:10.975 CC module/accel/error/accel_error.o 00:02:10.975 CC module/keyring/linux/keyring.o 00:02:10.975 CC module/accel/error/accel_error_rpc.o 00:02:10.975 CC module/keyring/linux/keyring_rpc.o 00:02:10.975 CC module/keyring/file/keyring.o 00:02:10.975 CC module/keyring/file/keyring_rpc.o 00:02:10.975 CC module/scheduler/gscheduler/gscheduler.o 00:02:10.975 CC module/accel/iaa/accel_iaa.o 00:02:10.975 CC module/accel/iaa/accel_iaa_rpc.o 00:02:10.975 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:10.975 SO libspdk_env_dpdk_rpc.so.6.0 00:02:10.975 CC module/blob/bdev/blob_bdev.o 00:02:10.975 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:10.975 SYMLINK libspdk_env_dpdk_rpc.so 00:02:10.975 LIB libspdk_keyring_linux.a 00:02:10.975 LIB libspdk_accel_ioat.a 00:02:10.975 LIB libspdk_keyring_file.a 00:02:10.975 LIB libspdk_scheduler_dpdk_governor.a 00:02:10.975 SO libspdk_keyring_linux.so.1.0 00:02:10.975 LIB libspdk_scheduler_gscheduler.a 00:02:10.975 LIB libspdk_accel_error.a 00:02:10.975 SO libspdk_accel_ioat.so.6.0 00:02:10.975 SO libspdk_keyring_file.so.1.0 00:02:10.975 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:10.975 LIB libspdk_accel_iaa.a 00:02:10.975 SO libspdk_scheduler_gscheduler.so.4.0 00:02:10.975 SO 
libspdk_accel_error.so.2.0 00:02:11.232 LIB libspdk_scheduler_dynamic.a 00:02:11.232 SYMLINK libspdk_keyring_linux.so 00:02:11.232 LIB libspdk_accel_dsa.a 00:02:11.232 SO libspdk_accel_iaa.so.3.0 00:02:11.232 SYMLINK libspdk_accel_ioat.so 00:02:11.232 SYMLINK libspdk_keyring_file.so 00:02:11.232 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:11.232 SO libspdk_scheduler_dynamic.so.4.0 00:02:11.232 SYMLINK libspdk_scheduler_gscheduler.so 00:02:11.232 LIB libspdk_blob_bdev.a 00:02:11.232 SYMLINK libspdk_accel_error.so 00:02:11.232 SO libspdk_accel_dsa.so.5.0 00:02:11.232 SYMLINK libspdk_accel_iaa.so 00:02:11.232 SO libspdk_blob_bdev.so.11.0 00:02:11.232 SYMLINK libspdk_scheduler_dynamic.so 00:02:11.232 SYMLINK libspdk_accel_dsa.so 00:02:11.232 LIB libspdk_vfu_device.a 00:02:11.232 SYMLINK libspdk_blob_bdev.so 00:02:11.232 SO libspdk_vfu_device.so.3.0 00:02:11.232 SYMLINK libspdk_vfu_device.so 00:02:11.491 LIB libspdk_sock_posix.a 00:02:11.491 SO libspdk_sock_posix.so.6.0 00:02:11.491 SYMLINK libspdk_sock_posix.so 00:02:11.748 CC module/bdev/ftl/bdev_ftl.o 00:02:11.748 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:11.748 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:11.748 CC module/bdev/iscsi/bdev_iscsi.o 00:02:11.748 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:11.748 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:11.748 CC module/bdev/error/vbdev_error_rpc.o 00:02:11.748 CC module/bdev/error/vbdev_error.o 00:02:11.748 CC module/bdev/raid/bdev_raid.o 00:02:11.748 CC module/bdev/raid/bdev_raid_rpc.o 00:02:11.748 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:11.748 CC module/bdev/raid/bdev_raid_sb.o 00:02:11.748 CC module/bdev/malloc/bdev_malloc.o 00:02:11.748 CC module/bdev/nvme/bdev_nvme.o 00:02:11.748 CC module/bdev/aio/bdev_aio.o 00:02:11.748 CC module/bdev/aio/bdev_aio_rpc.o 00:02:11.748 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:11.748 CC module/bdev/raid/raid0.o 00:02:11.748 CC module/bdev/raid/raid1.o 00:02:11.748 CC module/bdev/raid/concat.o 00:02:11.748 CC module/bdev/nvme/nvme_rpc.o 00:02:11.748 CC module/bdev/nvme/bdev_mdns_client.o 00:02:11.748 CC module/bdev/nvme/vbdev_opal.o 00:02:11.748 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:11.748 CC module/blobfs/bdev/blobfs_bdev.o 00:02:11.748 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:11.748 CC module/bdev/delay/vbdev_delay.o 00:02:11.748 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:11.748 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:11.748 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:11.748 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:11.748 CC module/bdev/split/vbdev_split.o 00:02:11.748 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:11.748 CC module/bdev/split/vbdev_split_rpc.o 00:02:11.748 CC module/bdev/gpt/gpt.o 00:02:11.748 CC module/bdev/null/bdev_null.o 00:02:11.748 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:11.748 CC module/bdev/null/bdev_null_rpc.o 00:02:11.748 CC module/bdev/gpt/vbdev_gpt.o 00:02:11.748 CC module/bdev/lvol/vbdev_lvol.o 00:02:11.748 CC module/bdev/passthru/vbdev_passthru.o 00:02:11.748 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:12.006 LIB libspdk_blobfs_bdev.a 00:02:12.006 SO libspdk_blobfs_bdev.so.6.0 00:02:12.006 LIB libspdk_bdev_split.a 00:02:12.006 LIB libspdk_bdev_error.a 00:02:12.006 LIB libspdk_bdev_null.a 00:02:12.006 LIB libspdk_bdev_ftl.a 00:02:12.006 SO libspdk_bdev_error.so.6.0 00:02:12.006 SO libspdk_bdev_split.so.6.0 00:02:12.006 SO libspdk_bdev_null.so.6.0 00:02:12.006 LIB libspdk_bdev_gpt.a 00:02:12.006 LIB libspdk_bdev_zone_block.a 00:02:12.006 SYMLINK 
libspdk_blobfs_bdev.so 00:02:12.006 SO libspdk_bdev_ftl.so.6.0 00:02:12.006 LIB libspdk_bdev_aio.a 00:02:12.006 SO libspdk_bdev_gpt.so.6.0 00:02:12.006 LIB libspdk_bdev_iscsi.a 00:02:12.006 SO libspdk_bdev_zone_block.so.6.0 00:02:12.006 LIB libspdk_bdev_passthru.a 00:02:12.006 SO libspdk_bdev_aio.so.6.0 00:02:12.006 SYMLINK libspdk_bdev_error.so 00:02:12.006 SYMLINK libspdk_bdev_split.so 00:02:12.006 SYMLINK libspdk_bdev_null.so 00:02:12.006 LIB libspdk_bdev_delay.a 00:02:12.006 SO libspdk_bdev_passthru.so.6.0 00:02:12.006 SYMLINK libspdk_bdev_ftl.so 00:02:12.006 SO libspdk_bdev_iscsi.so.6.0 00:02:12.006 LIB libspdk_bdev_malloc.a 00:02:12.006 SO libspdk_bdev_delay.so.6.0 00:02:12.006 SYMLINK libspdk_bdev_zone_block.so 00:02:12.006 SYMLINK libspdk_bdev_gpt.so 00:02:12.269 SO libspdk_bdev_malloc.so.6.0 00:02:12.269 SYMLINK libspdk_bdev_aio.so 00:02:12.269 SYMLINK libspdk_bdev_passthru.so 00:02:12.269 SYMLINK libspdk_bdev_iscsi.so 00:02:12.269 SYMLINK libspdk_bdev_delay.so 00:02:12.269 LIB libspdk_bdev_virtio.a 00:02:12.269 SYMLINK libspdk_bdev_malloc.so 00:02:12.269 LIB libspdk_bdev_lvol.a 00:02:12.269 SO libspdk_bdev_virtio.so.6.0 00:02:12.269 SO libspdk_bdev_lvol.so.6.0 00:02:12.269 SYMLINK libspdk_bdev_virtio.so 00:02:12.269 SYMLINK libspdk_bdev_lvol.so 00:02:12.527 LIB libspdk_bdev_raid.a 00:02:12.527 SO libspdk_bdev_raid.so.6.0 00:02:12.527 SYMLINK libspdk_bdev_raid.so 00:02:13.460 LIB libspdk_bdev_nvme.a 00:02:13.460 SO libspdk_bdev_nvme.so.7.0 00:02:13.460 SYMLINK libspdk_bdev_nvme.so 00:02:14.026 CC module/event/subsystems/keyring/keyring.o 00:02:14.026 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:14.026 CC module/event/subsystems/iobuf/iobuf.o 00:02:14.026 CC module/event/subsystems/vmd/vmd.o 00:02:14.026 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:14.026 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:14.026 CC module/event/subsystems/sock/sock.o 00:02:14.026 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:14.026 CC module/event/subsystems/scheduler/scheduler.o 00:02:14.026 LIB libspdk_event_keyring.a 00:02:14.026 SO libspdk_event_keyring.so.1.0 00:02:14.026 LIB libspdk_event_iobuf.a 00:02:14.026 LIB libspdk_event_vmd.a 00:02:14.026 LIB libspdk_event_vhost_blk.a 00:02:14.026 LIB libspdk_event_sock.a 00:02:14.286 LIB libspdk_event_vfu_tgt.a 00:02:14.286 LIB libspdk_event_scheduler.a 00:02:14.286 SO libspdk_event_iobuf.so.3.0 00:02:14.286 SO libspdk_event_vmd.so.6.0 00:02:14.286 SO libspdk_event_vhost_blk.so.3.0 00:02:14.286 SO libspdk_event_sock.so.5.0 00:02:14.286 SO libspdk_event_vfu_tgt.so.3.0 00:02:14.286 SO libspdk_event_scheduler.so.4.0 00:02:14.286 SYMLINK libspdk_event_keyring.so 00:02:14.286 SYMLINK libspdk_event_iobuf.so 00:02:14.286 SYMLINK libspdk_event_sock.so 00:02:14.286 SYMLINK libspdk_event_vhost_blk.so 00:02:14.286 SYMLINK libspdk_event_vfu_tgt.so 00:02:14.286 SYMLINK libspdk_event_scheduler.so 00:02:14.286 SYMLINK libspdk_event_vmd.so 00:02:14.545 CC module/event/subsystems/accel/accel.o 00:02:14.545 LIB libspdk_event_accel.a 00:02:14.803 SO libspdk_event_accel.so.6.0 00:02:14.803 SYMLINK libspdk_event_accel.so 00:02:15.061 CC module/event/subsystems/bdev/bdev.o 00:02:15.062 LIB libspdk_event_bdev.a 00:02:15.320 SO libspdk_event_bdev.so.6.0 00:02:15.320 SYMLINK libspdk_event_bdev.so 00:02:15.578 CC module/event/subsystems/scsi/scsi.o 00:02:15.578 CC module/event/subsystems/nbd/nbd.o 00:02:15.578 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:15.578 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:15.578 CC module/event/subsystems/ublk/ublk.o 
00:02:15.578 LIB libspdk_event_scsi.a 00:02:15.578 LIB libspdk_event_nbd.a 00:02:15.578 SO libspdk_event_scsi.so.6.0 00:02:15.578 LIB libspdk_event_ublk.a 00:02:15.836 SO libspdk_event_nbd.so.6.0 00:02:15.836 LIB libspdk_event_nvmf.a 00:02:15.836 SO libspdk_event_ublk.so.3.0 00:02:15.836 SYMLINK libspdk_event_scsi.so 00:02:15.836 SYMLINK libspdk_event_nbd.so 00:02:15.836 SO libspdk_event_nvmf.so.6.0 00:02:15.836 SYMLINK libspdk_event_ublk.so 00:02:15.836 SYMLINK libspdk_event_nvmf.so 00:02:16.095 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:16.095 CC module/event/subsystems/iscsi/iscsi.o 00:02:16.095 LIB libspdk_event_vhost_scsi.a 00:02:16.095 SO libspdk_event_vhost_scsi.so.3.0 00:02:16.095 LIB libspdk_event_iscsi.a 00:02:16.353 SO libspdk_event_iscsi.so.6.0 00:02:16.353 SYMLINK libspdk_event_vhost_scsi.so 00:02:16.353 SYMLINK libspdk_event_iscsi.so 00:02:16.353 SO libspdk.so.6.0 00:02:16.353 SYMLINK libspdk.so 00:02:16.926 TEST_HEADER include/spdk/accel.h 00:02:16.926 TEST_HEADER include/spdk/assert.h 00:02:16.926 TEST_HEADER include/spdk/accel_module.h 00:02:16.926 CC app/spdk_lspci/spdk_lspci.o 00:02:16.926 CXX app/trace/trace.o 00:02:16.926 TEST_HEADER include/spdk/base64.h 00:02:16.926 TEST_HEADER include/spdk/barrier.h 00:02:16.926 CC test/rpc_client/rpc_client_test.o 00:02:16.926 TEST_HEADER include/spdk/bdev_module.h 00:02:16.926 TEST_HEADER include/spdk/bdev.h 00:02:16.926 TEST_HEADER include/spdk/bdev_zone.h 00:02:16.926 TEST_HEADER include/spdk/bit_array.h 00:02:16.926 TEST_HEADER include/spdk/blob_bdev.h 00:02:16.926 CC app/spdk_top/spdk_top.o 00:02:16.926 TEST_HEADER include/spdk/bit_pool.h 00:02:16.926 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:16.926 TEST_HEADER include/spdk/blobfs.h 00:02:16.926 TEST_HEADER include/spdk/blob.h 00:02:16.926 TEST_HEADER include/spdk/conf.h 00:02:16.926 TEST_HEADER include/spdk/config.h 00:02:16.926 TEST_HEADER include/spdk/crc16.h 00:02:16.926 TEST_HEADER include/spdk/cpuset.h 00:02:16.926 TEST_HEADER include/spdk/crc32.h 00:02:16.926 TEST_HEADER include/spdk/crc64.h 00:02:16.926 TEST_HEADER include/spdk/dma.h 00:02:16.926 TEST_HEADER include/spdk/dif.h 00:02:16.926 TEST_HEADER include/spdk/endian.h 00:02:16.926 TEST_HEADER include/spdk/event.h 00:02:16.926 TEST_HEADER include/spdk/env.h 00:02:16.926 TEST_HEADER include/spdk/env_dpdk.h 00:02:16.926 TEST_HEADER include/spdk/fd_group.h 00:02:16.926 CC app/spdk_nvme_discover/discovery_aer.o 00:02:16.926 CC app/spdk_nvme_perf/perf.o 00:02:16.926 CC app/trace_record/trace_record.o 00:02:16.926 TEST_HEADER include/spdk/fd.h 00:02:16.926 TEST_HEADER include/spdk/ftl.h 00:02:16.926 TEST_HEADER include/spdk/file.h 00:02:16.926 TEST_HEADER include/spdk/hexlify.h 00:02:16.926 TEST_HEADER include/spdk/gpt_spec.h 00:02:16.926 CC app/spdk_nvme_identify/identify.o 00:02:16.926 TEST_HEADER include/spdk/idxd.h 00:02:16.926 TEST_HEADER include/spdk/histogram_data.h 00:02:16.926 TEST_HEADER include/spdk/init.h 00:02:16.926 TEST_HEADER include/spdk/idxd_spec.h 00:02:16.926 TEST_HEADER include/spdk/ioat.h 00:02:16.926 TEST_HEADER include/spdk/ioat_spec.h 00:02:16.926 TEST_HEADER include/spdk/iscsi_spec.h 00:02:16.926 TEST_HEADER include/spdk/jsonrpc.h 00:02:16.926 TEST_HEADER include/spdk/json.h 00:02:16.926 TEST_HEADER include/spdk/keyring.h 00:02:16.926 TEST_HEADER include/spdk/keyring_module.h 00:02:16.926 TEST_HEADER include/spdk/likely.h 00:02:16.926 TEST_HEADER include/spdk/mmio.h 00:02:16.926 TEST_HEADER include/spdk/lvol.h 00:02:16.926 TEST_HEADER include/spdk/log.h 00:02:16.926 TEST_HEADER 
include/spdk/memory.h 00:02:16.926 TEST_HEADER include/spdk/nbd.h 00:02:16.926 TEST_HEADER include/spdk/notify.h 00:02:16.926 TEST_HEADER include/spdk/net.h 00:02:16.926 TEST_HEADER include/spdk/nvme.h 00:02:16.926 TEST_HEADER include/spdk/nvme_intel.h 00:02:16.926 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:16.926 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:16.926 TEST_HEADER include/spdk/nvme_spec.h 00:02:16.926 TEST_HEADER include/spdk/nvme_zns.h 00:02:16.926 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:16.926 TEST_HEADER include/spdk/nvmf_spec.h 00:02:16.926 TEST_HEADER include/spdk/nvmf.h 00:02:16.926 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:16.926 TEST_HEADER include/spdk/nvmf_transport.h 00:02:16.926 TEST_HEADER include/spdk/opal.h 00:02:16.926 TEST_HEADER include/spdk/opal_spec.h 00:02:16.926 TEST_HEADER include/spdk/pci_ids.h 00:02:16.926 TEST_HEADER include/spdk/pipe.h 00:02:16.926 TEST_HEADER include/spdk/queue.h 00:02:16.926 TEST_HEADER include/spdk/reduce.h 00:02:16.926 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:16.926 TEST_HEADER include/spdk/rpc.h 00:02:16.926 TEST_HEADER include/spdk/scheduler.h 00:02:16.926 TEST_HEADER include/spdk/scsi.h 00:02:16.926 TEST_HEADER include/spdk/scsi_spec.h 00:02:16.926 TEST_HEADER include/spdk/sock.h 00:02:16.926 CC app/spdk_dd/spdk_dd.o 00:02:16.926 TEST_HEADER include/spdk/stdinc.h 00:02:16.926 TEST_HEADER include/spdk/trace.h 00:02:16.926 TEST_HEADER include/spdk/trace_parser.h 00:02:16.926 TEST_HEADER include/spdk/string.h 00:02:16.926 TEST_HEADER include/spdk/thread.h 00:02:16.926 TEST_HEADER include/spdk/tree.h 00:02:16.926 TEST_HEADER include/spdk/util.h 00:02:16.926 TEST_HEADER include/spdk/uuid.h 00:02:16.926 TEST_HEADER include/spdk/ublk.h 00:02:16.926 CC app/nvmf_tgt/nvmf_main.o 00:02:16.926 TEST_HEADER include/spdk/version.h 00:02:16.926 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:16.926 TEST_HEADER include/spdk/vhost.h 00:02:16.926 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:16.926 TEST_HEADER include/spdk/vmd.h 00:02:16.926 TEST_HEADER include/spdk/xor.h 00:02:16.926 TEST_HEADER include/spdk/zipf.h 00:02:16.926 CXX test/cpp_headers/accel.o 00:02:16.926 CXX test/cpp_headers/accel_module.o 00:02:16.926 CXX test/cpp_headers/assert.o 00:02:16.926 CXX test/cpp_headers/base64.o 00:02:16.926 CXX test/cpp_headers/barrier.o 00:02:16.926 CXX test/cpp_headers/bdev_module.o 00:02:16.926 CC app/iscsi_tgt/iscsi_tgt.o 00:02:16.926 CXX test/cpp_headers/bdev.o 00:02:16.926 CXX test/cpp_headers/bdev_zone.o 00:02:16.926 CXX test/cpp_headers/bit_array.o 00:02:16.926 CXX test/cpp_headers/bit_pool.o 00:02:16.926 CXX test/cpp_headers/blob_bdev.o 00:02:16.926 CXX test/cpp_headers/blobfs_bdev.o 00:02:16.926 CXX test/cpp_headers/blobfs.o 00:02:16.926 CXX test/cpp_headers/blob.o 00:02:16.926 CXX test/cpp_headers/conf.o 00:02:16.926 CXX test/cpp_headers/config.o 00:02:16.926 CXX test/cpp_headers/cpuset.o 00:02:16.926 CXX test/cpp_headers/crc16.o 00:02:16.926 CXX test/cpp_headers/crc32.o 00:02:16.926 CXX test/cpp_headers/dif.o 00:02:16.926 CXX test/cpp_headers/dma.o 00:02:16.926 CXX test/cpp_headers/env_dpdk.o 00:02:16.926 CXX test/cpp_headers/crc64.o 00:02:16.926 CXX test/cpp_headers/endian.o 00:02:16.926 CXX test/cpp_headers/env.o 00:02:16.926 CXX test/cpp_headers/fd_group.o 00:02:16.926 CXX test/cpp_headers/event.o 00:02:16.926 CXX test/cpp_headers/fd.o 00:02:16.926 CXX test/cpp_headers/file.o 00:02:16.926 CXX test/cpp_headers/ftl.o 00:02:16.926 CXX test/cpp_headers/gpt_spec.o 00:02:16.926 CXX test/cpp_headers/histogram_data.o 
00:02:16.926 CC app/spdk_tgt/spdk_tgt.o 00:02:16.926 CXX test/cpp_headers/hexlify.o 00:02:16.926 CXX test/cpp_headers/idxd_spec.o 00:02:16.926 CXX test/cpp_headers/idxd.o 00:02:16.926 CXX test/cpp_headers/init.o 00:02:16.926 CXX test/cpp_headers/ioat_spec.o 00:02:16.926 CXX test/cpp_headers/ioat.o 00:02:16.926 CXX test/cpp_headers/jsonrpc.o 00:02:16.926 CXX test/cpp_headers/iscsi_spec.o 00:02:16.926 CXX test/cpp_headers/keyring_module.o 00:02:16.926 CXX test/cpp_headers/json.o 00:02:16.926 CXX test/cpp_headers/keyring.o 00:02:16.926 CXX test/cpp_headers/log.o 00:02:16.926 CXX test/cpp_headers/likely.o 00:02:16.926 CXX test/cpp_headers/lvol.o 00:02:16.926 CXX test/cpp_headers/memory.o 00:02:16.926 CXX test/cpp_headers/nbd.o 00:02:16.926 CXX test/cpp_headers/mmio.o 00:02:16.926 CXX test/cpp_headers/net.o 00:02:16.926 CXX test/cpp_headers/notify.o 00:02:16.926 CXX test/cpp_headers/nvme.o 00:02:16.926 CXX test/cpp_headers/nvme_intel.o 00:02:16.926 CXX test/cpp_headers/nvme_ocssd.o 00:02:16.926 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:16.926 CXX test/cpp_headers/nvme_zns.o 00:02:16.926 CXX test/cpp_headers/nvme_spec.o 00:02:16.926 CXX test/cpp_headers/nvmf_cmd.o 00:02:16.926 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:16.926 CXX test/cpp_headers/nvmf.o 00:02:16.926 CXX test/cpp_headers/nvmf_transport.o 00:02:16.926 CXX test/cpp_headers/nvmf_spec.o 00:02:16.926 CXX test/cpp_headers/opal.o 00:02:16.926 CXX test/cpp_headers/opal_spec.o 00:02:16.926 CXX test/cpp_headers/pipe.o 00:02:16.926 CXX test/cpp_headers/pci_ids.o 00:02:16.926 CXX test/cpp_headers/queue.o 00:02:16.926 CC test/thread/poller_perf/poller_perf.o 00:02:16.926 CC test/app/jsoncat/jsoncat.o 00:02:16.926 CC test/env/pci/pci_ut.o 00:02:16.926 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:16.926 CXX test/cpp_headers/reduce.o 00:02:16.926 CC test/app/histogram_perf/histogram_perf.o 00:02:16.926 CC test/app/stub/stub.o 00:02:16.926 CC examples/util/zipf/zipf.o 00:02:16.926 CC test/env/memory/memory_ut.o 00:02:16.926 CC app/fio/nvme/fio_plugin.o 00:02:16.926 CC examples/ioat/verify/verify.o 00:02:16.926 CC test/env/vtophys/vtophys.o 00:02:16.926 CC examples/ioat/perf/perf.o 00:02:16.926 CC test/app/bdev_svc/bdev_svc.o 00:02:17.195 CC test/dma/test_dma/test_dma.o 00:02:17.195 LINK spdk_lspci 00:02:17.195 CC app/fio/bdev/fio_plugin.o 00:02:17.459 LINK spdk_nvme_discover 00:02:17.459 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:17.459 CC test/env/mem_callbacks/mem_callbacks.o 00:02:17.459 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:17.459 LINK rpc_client_test 00:02:17.459 LINK histogram_perf 00:02:17.459 LINK jsoncat 00:02:17.459 LINK zipf 00:02:17.459 CXX test/cpp_headers/rpc.o 00:02:17.459 LINK env_dpdk_post_init 00:02:17.459 CXX test/cpp_headers/scheduler.o 00:02:17.459 CXX test/cpp_headers/scsi.o 00:02:17.459 CXX test/cpp_headers/scsi_spec.o 00:02:17.459 CXX test/cpp_headers/sock.o 00:02:17.459 CXX test/cpp_headers/stdinc.o 00:02:17.459 CXX test/cpp_headers/string.o 00:02:17.459 CXX test/cpp_headers/thread.o 00:02:17.459 CXX test/cpp_headers/trace.o 00:02:17.459 LINK interrupt_tgt 00:02:17.459 CXX test/cpp_headers/trace_parser.o 00:02:17.459 CXX test/cpp_headers/tree.o 00:02:17.459 CXX test/cpp_headers/ublk.o 00:02:17.459 CXX test/cpp_headers/util.o 00:02:17.459 CXX test/cpp_headers/uuid.o 00:02:17.459 LINK stub 00:02:17.459 LINK nvmf_tgt 00:02:17.459 LINK poller_perf 00:02:17.459 CXX test/cpp_headers/version.o 00:02:17.459 CXX test/cpp_headers/vfio_user_pci.o 00:02:17.459 CXX test/cpp_headers/vfio_user_spec.o 
00:02:17.459 CXX test/cpp_headers/vhost.o 00:02:17.459 CXX test/cpp_headers/vmd.o 00:02:17.459 CXX test/cpp_headers/xor.o 00:02:17.459 CXX test/cpp_headers/zipf.o 00:02:17.715 LINK verify 00:02:17.715 LINK iscsi_tgt 00:02:17.715 LINK spdk_trace_record 00:02:17.715 LINK vtophys 00:02:17.715 LINK spdk_tgt 00:02:17.715 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:17.715 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:17.715 LINK bdev_svc 00:02:17.715 LINK ioat_perf 00:02:17.715 LINK pci_ut 00:02:17.715 LINK spdk_dd 00:02:17.715 LINK spdk_trace 00:02:17.971 LINK test_dma 00:02:17.971 LINK spdk_nvme 00:02:17.972 CC examples/sock/hello_world/hello_sock.o 00:02:17.972 CC examples/vmd/led/led.o 00:02:17.972 CC examples/vmd/lsvmd/lsvmd.o 00:02:17.972 CC examples/idxd/perf/perf.o 00:02:17.972 LINK spdk_bdev 00:02:17.972 LINK spdk_nvme_identify 00:02:17.972 CC examples/thread/thread/thread_ex.o 00:02:17.972 LINK nvme_fuzz 00:02:17.972 CC test/event/event_perf/event_perf.o 00:02:17.972 CC test/event/reactor_perf/reactor_perf.o 00:02:17.972 CC test/event/reactor/reactor.o 00:02:17.972 LINK vhost_fuzz 00:02:17.972 CC test/event/app_repeat/app_repeat.o 00:02:17.972 LINK spdk_top 00:02:17.972 CC test/event/scheduler/scheduler.o 00:02:18.230 LINK spdk_nvme_perf 00:02:18.230 LINK lsvmd 00:02:18.230 LINK mem_callbacks 00:02:18.230 LINK led 00:02:18.230 CC app/vhost/vhost.o 00:02:18.230 LINK hello_sock 00:02:18.230 LINK reactor 00:02:18.230 LINK reactor_perf 00:02:18.230 LINK event_perf 00:02:18.230 LINK app_repeat 00:02:18.230 LINK thread 00:02:18.230 LINK idxd_perf 00:02:18.230 CC test/nvme/reserve/reserve.o 00:02:18.230 LINK scheduler 00:02:18.230 CC test/nvme/connect_stress/connect_stress.o 00:02:18.230 CC test/nvme/fused_ordering/fused_ordering.o 00:02:18.230 CC test/nvme/fdp/fdp.o 00:02:18.230 CC test/nvme/err_injection/err_injection.o 00:02:18.230 CC test/nvme/sgl/sgl.o 00:02:18.230 CC test/nvme/cuse/cuse.o 00:02:18.230 CC test/nvme/startup/startup.o 00:02:18.230 CC test/nvme/reset/reset.o 00:02:18.230 CC test/nvme/boot_partition/boot_partition.o 00:02:18.230 CC test/nvme/e2edp/nvme_dp.o 00:02:18.230 CC test/nvme/overhead/overhead.o 00:02:18.230 CC test/nvme/compliance/nvme_compliance.o 00:02:18.230 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:18.230 CC test/nvme/aer/aer.o 00:02:18.230 CC test/blobfs/mkfs/mkfs.o 00:02:18.230 CC test/nvme/simple_copy/simple_copy.o 00:02:18.487 CC test/accel/dif/dif.o 00:02:18.487 LINK vhost 00:02:18.487 CC test/lvol/esnap/esnap.o 00:02:18.487 LINK memory_ut 00:02:18.487 LINK connect_stress 00:02:18.487 LINK boot_partition 00:02:18.487 LINK startup 00:02:18.487 LINK doorbell_aers 00:02:18.487 LINK fused_ordering 00:02:18.487 LINK err_injection 00:02:18.487 LINK mkfs 00:02:18.487 LINK reserve 00:02:18.487 LINK simple_copy 00:02:18.487 LINK reset 00:02:18.487 LINK sgl 00:02:18.487 LINK nvme_dp 00:02:18.744 LINK overhead 00:02:18.744 LINK aer 00:02:18.744 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:18.744 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:18.744 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:18.744 CC examples/nvme/hotplug/hotplug.o 00:02:18.744 LINK nvme_compliance 00:02:18.744 CC examples/nvme/reconnect/reconnect.o 00:02:18.744 CC examples/nvme/abort/abort.o 00:02:18.744 CC examples/nvme/hello_world/hello_world.o 00:02:18.744 LINK fdp 00:02:18.744 CC examples/nvme/arbitration/arbitration.o 00:02:18.744 CC examples/accel/perf/accel_perf.o 00:02:18.744 CC examples/blob/hello_world/hello_blob.o 00:02:18.744 LINK dif 00:02:18.744 CC 
examples/blob/cli/blobcli.o 00:02:18.744 LINK pmr_persistence 00:02:18.744 LINK cmb_copy 00:02:18.744 LINK hotplug 00:02:19.003 LINK hello_world 00:02:19.003 LINK arbitration 00:02:19.003 LINK reconnect 00:02:19.003 LINK abort 00:02:19.003 LINK iscsi_fuzz 00:02:19.003 LINK hello_blob 00:02:19.003 LINK nvme_manage 00:02:19.003 LINK accel_perf 00:02:19.261 LINK blobcli 00:02:19.261 CC test/bdev/bdevio/bdevio.o 00:02:19.261 LINK cuse 00:02:19.519 CC examples/bdev/hello_world/hello_bdev.o 00:02:19.519 CC examples/bdev/bdevperf/bdevperf.o 00:02:19.519 LINK bdevio 00:02:19.777 LINK hello_bdev 00:02:20.035 LINK bdevperf 00:02:20.602 CC examples/nvmf/nvmf/nvmf.o 00:02:20.860 LINK nvmf 00:02:21.858 LINK esnap 00:02:22.117 00:02:22.117 real 0m43.596s 00:02:22.117 user 6m46.796s 00:02:22.117 sys 3m27.643s 00:02:22.117 01:07:47 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:22.117 01:07:47 make -- common/autotest_common.sh@10 -- $ set +x 00:02:22.117 ************************************ 00:02:22.117 END TEST make 00:02:22.117 ************************************ 00:02:22.117 01:07:47 -- common/autotest_common.sh@1142 -- $ return 0 00:02:22.117 01:07:47 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:22.117 01:07:47 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:22.117 01:07:47 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:22.117 01:07:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:22.117 01:07:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:22.117 01:07:47 -- pm/common@44 -- $ pid=3096979 00:02:22.117 01:07:47 -- pm/common@50 -- $ kill -TERM 3096979 00:02:22.117 01:07:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:22.117 01:07:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:22.117 01:07:47 -- pm/common@44 -- $ pid=3096981 00:02:22.117 01:07:47 -- pm/common@50 -- $ kill -TERM 3096981 00:02:22.117 01:07:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:22.117 01:07:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:22.117 01:07:47 -- pm/common@44 -- $ pid=3096983 00:02:22.117 01:07:47 -- pm/common@50 -- $ kill -TERM 3096983 00:02:22.117 01:07:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:22.117 01:07:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:22.117 01:07:47 -- pm/common@44 -- $ pid=3097011 00:02:22.117 01:07:47 -- pm/common@50 -- $ sudo -E kill -TERM 3097011 00:02:22.117 01:07:48 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:22.117 01:07:48 -- nvmf/common.sh@7 -- # uname -s 00:02:22.377 01:07:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:22.377 01:07:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:22.377 01:07:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:22.377 01:07:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:22.377 01:07:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:22.377 01:07:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:22.377 01:07:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:22.377 01:07:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:22.377 01:07:48 -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:22.377 01:07:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:22.377 01:07:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:02:22.377 01:07:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:02:22.377 01:07:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:22.377 01:07:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:22.377 01:07:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:22.377 01:07:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:22.377 01:07:48 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:22.377 01:07:48 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:22.377 01:07:48 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:22.377 01:07:48 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:22.377 01:07:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:22.377 01:07:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:22.377 01:07:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:22.377 01:07:48 -- paths/export.sh@5 -- # export PATH 00:02:22.377 01:07:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:22.377 01:07:48 -- nvmf/common.sh@47 -- # : 0 00:02:22.377 01:07:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:22.377 01:07:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:22.377 01:07:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:22.377 01:07:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:22.377 01:07:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:22.377 01:07:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:22.377 01:07:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:22.377 01:07:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:22.377 01:07:48 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:22.377 01:07:48 -- spdk/autotest.sh@32 -- # uname -s 00:02:22.377 01:07:48 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:22.377 01:07:48 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:22.377 01:07:48 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:22.377 01:07:48 -- spdk/autotest.sh@39 -- # echo 
'|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:22.377 01:07:48 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:22.377 01:07:48 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:22.377 01:07:48 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:22.377 01:07:48 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:22.377 01:07:48 -- spdk/autotest.sh@48 -- # udevadm_pid=3155989 00:02:22.377 01:07:48 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:22.377 01:07:48 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:22.377 01:07:48 -- pm/common@17 -- # local monitor 00:02:22.377 01:07:48 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:22.377 01:07:48 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:22.377 01:07:48 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:22.377 01:07:48 -- pm/common@21 -- # date +%s 00:02:22.377 01:07:48 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:22.377 01:07:48 -- pm/common@21 -- # date +%s 00:02:22.377 01:07:48 -- pm/common@25 -- # sleep 1 00:02:22.377 01:07:48 -- pm/common@21 -- # date +%s 00:02:22.377 01:07:48 -- pm/common@21 -- # date +%s 00:02:22.377 01:07:48 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721084868 00:02:22.377 01:07:48 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721084868 00:02:22.377 01:07:48 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721084868 00:02:22.377 01:07:48 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721084868 00:02:22.377 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721084868_collect-vmstat.pm.log 00:02:22.377 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721084868_collect-cpu-load.pm.log 00:02:22.377 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721084868_collect-cpu-temp.pm.log 00:02:22.377 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721084868_collect-bmc-pm.bmc.pm.log 00:02:23.313 01:07:49 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:23.313 01:07:49 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:23.313 01:07:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:23.313 01:07:49 -- common/autotest_common.sh@10 -- # set +x 00:02:23.313 01:07:49 -- spdk/autotest.sh@59 -- # create_test_list 00:02:23.313 01:07:49 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:23.313 01:07:49 -- common/autotest_common.sh@10 -- # set +x 00:02:23.313 01:07:49 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:23.313 01:07:49 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:23.313 01:07:49 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:23.313 01:07:49 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:23.314 01:07:49 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:23.314 01:07:49 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:23.314 01:07:49 -- common/autotest_common.sh@1455 -- # uname 00:02:23.314 01:07:49 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:23.314 01:07:49 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:23.314 01:07:49 -- common/autotest_common.sh@1475 -- # uname 00:02:23.314 01:07:49 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:23.314 01:07:49 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:23.314 01:07:49 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:23.314 01:07:49 -- spdk/autotest.sh@72 -- # hash lcov 00:02:23.314 01:07:49 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:23.314 01:07:49 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:23.314 --rc lcov_branch_coverage=1 00:02:23.314 --rc lcov_function_coverage=1 00:02:23.314 --rc genhtml_branch_coverage=1 00:02:23.314 --rc genhtml_function_coverage=1 00:02:23.314 --rc genhtml_legend=1 00:02:23.314 --rc geninfo_all_blocks=1 00:02:23.314 ' 00:02:23.314 01:07:49 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:23.314 --rc lcov_branch_coverage=1 00:02:23.314 --rc lcov_function_coverage=1 00:02:23.314 --rc genhtml_branch_coverage=1 00:02:23.314 --rc genhtml_function_coverage=1 00:02:23.314 --rc genhtml_legend=1 00:02:23.314 --rc geninfo_all_blocks=1 00:02:23.314 ' 00:02:23.314 01:07:49 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:23.314 --rc lcov_branch_coverage=1 00:02:23.314 --rc lcov_function_coverage=1 00:02:23.314 --rc genhtml_branch_coverage=1 00:02:23.314 --rc genhtml_function_coverage=1 00:02:23.314 --rc genhtml_legend=1 00:02:23.314 --rc geninfo_all_blocks=1 00:02:23.314 --no-external' 00:02:23.314 01:07:49 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:23.314 --rc lcov_branch_coverage=1 00:02:23.314 --rc lcov_function_coverage=1 00:02:23.314 --rc genhtml_branch_coverage=1 00:02:23.314 --rc genhtml_function_coverage=1 00:02:23.314 --rc genhtml_legend=1 00:02:23.314 --rc geninfo_all_blocks=1 00:02:23.314 --no-external' 00:02:23.314 01:07:49 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:23.314 lcov: LCOV version 1.14 00:02:23.314 01:07:49 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:24.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:24.687 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:24.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:24.687 geninfo: WARNING: GCOV did not produce any 
data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:24.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:24.687 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:24.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:24.687 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:24.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:24.687 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:24.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:24.687 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:24.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:24.687 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:24.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:24.687 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:24.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:24.687 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:24.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:24.687 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:24.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:24.687 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:24.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:24.687 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:24.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:24.687 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:24.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:24.687 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:24.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:24.687 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:24.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 
00:02:24.687 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:24.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:24.687 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:24.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:24.687 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:24.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:24.687 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:24.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:24.687 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:24.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:24.687 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:24.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:24.687 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:24.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:24.687 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:24.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:24.687 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:24.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:24.687 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:24.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:24.687 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:24.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:24.687 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:24.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:24.687 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:24.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:24.687 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:24.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 
00:02:24.687 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:24.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:24.687 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:24.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:24.687 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:24.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:24.687 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:24.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:24.687 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:24.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:24.687 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:24.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:24.687 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:24.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:24.687 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:24.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:24.688 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:24.688 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:24.688 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:24.688 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:24.688 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:24.688 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:24.688 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:24.688 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:24.688 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:24.688 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:24.688 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:24.688 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:24.688 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:24.688 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:24.688 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:24.688 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:24.688 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:24.688 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:24.688 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:24.688 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:24.688 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:24.688 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:24.688 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:24.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:24.947 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:24.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:02:24.947 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno 00:02:24.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:24.947 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:24.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:24.947 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:24.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:24.947 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:24.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:24.947 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:24.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:24.947 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:24.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:24.947 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:24.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:24.947 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:24.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:24.947 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:24.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:24.947 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:24.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:24.947 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:24.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:24.947 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:24.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:24.947 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:24.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:24.947 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:24.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:24.947 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:24.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:24.947 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:24.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:24.947 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:24.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:24.947 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:24.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:24.947 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:24.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:24.947 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:24.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:24.947 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:24.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:24.947 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:24.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:24.947 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:24.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:24.947 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:24.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:24.947 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:24.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:24.947 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:24.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:24.947 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:24.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:24.947 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:24.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:24.947 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:24.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:24.947 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:24.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:24.947 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:24.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:24.947 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:24.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:24.947 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:24.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:24.947 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:24.947 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:24.947 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:24.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:24.947 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:25.206 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:25.206 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:25.206 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:25.206 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:25.206 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:25.206 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:35.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:35.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:47.356 01:08:11 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:47.356 01:08:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:47.356 01:08:11 -- common/autotest_common.sh@10 -- # set +x 00:02:47.356 01:08:11 -- spdk/autotest.sh@91 -- # rm -f 00:02:47.356 01:08:11 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:48.290 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:02:48.290 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:48.290 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:48.290 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:48.290 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:48.290 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:48.290 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:48.290 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:48.290 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:48.547 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:48.547 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:48.547 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:48.547 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:48.547 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:48.547 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:48.547 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:48.547 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:48.547 01:08:14 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:48.547 01:08:14 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:48.547 01:08:14 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:48.547 01:08:14 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:48.547 01:08:14 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:48.547 01:08:14 -- 
common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:48.547 01:08:14 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:48.547 01:08:14 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:48.547 01:08:14 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:48.547 01:08:14 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:48.547 01:08:14 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:48.547 01:08:14 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:48.547 01:08:14 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:48.547 01:08:14 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:48.547 01:08:14 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:48.547 No valid GPT data, bailing 00:02:48.547 01:08:14 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:48.547 01:08:14 -- scripts/common.sh@391 -- # pt= 00:02:48.547 01:08:14 -- scripts/common.sh@392 -- # return 1 00:02:48.547 01:08:14 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:48.805 1+0 records in 00:02:48.805 1+0 records out 00:02:48.805 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00412928 s, 254 MB/s 00:02:48.805 01:08:14 -- spdk/autotest.sh@118 -- # sync 00:02:48.805 01:08:14 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:48.805 01:08:14 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:48.805 01:08:14 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:54.075 01:08:19 -- spdk/autotest.sh@124 -- # uname -s 00:02:54.075 01:08:19 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:54.075 01:08:19 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:54.075 01:08:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:54.075 01:08:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:54.075 01:08:19 -- common/autotest_common.sh@10 -- # set +x 00:02:54.075 ************************************ 00:02:54.075 START TEST setup.sh 00:02:54.075 ************************************ 00:02:54.075 01:08:19 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:54.075 * Looking for test storage... 00:02:54.075 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:54.075 01:08:19 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:54.075 01:08:19 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:54.075 01:08:19 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:54.075 01:08:19 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:54.075 01:08:19 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:54.075 01:08:19 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:54.075 ************************************ 00:02:54.075 START TEST acl 00:02:54.075 ************************************ 00:02:54.075 01:08:19 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:54.075 * Looking for test storage... 
00:02:54.075 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:54.075 01:08:19 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:54.075 01:08:19 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:54.075 01:08:19 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:54.075 01:08:19 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:54.075 01:08:19 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:54.075 01:08:19 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:54.075 01:08:19 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:54.075 01:08:19 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:54.075 01:08:19 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:54.075 01:08:19 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:54.075 01:08:19 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:54.075 01:08:19 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:54.075 01:08:19 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:54.075 01:08:19 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:54.075 01:08:19 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:54.075 01:08:19 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:56.607 01:08:22 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:56.607 01:08:22 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:56.607 01:08:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.607 01:08:22 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:56.607 01:08:22 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:56.607 01:08:22 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:59.142 Hugepages 00:02:59.142 node hugesize free / total 00:02:59.142 01:08:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:59.142 01:08:25 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:59.142 01:08:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.142 01:08:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:59.142 01:08:25 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:59.142 01:08:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.142 01:08:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:59.142 01:08:25 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:59.142 01:08:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.142 00:02:59.142 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:59.142 01:08:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:59.142 01:08:25 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:59.142 01:08:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.142 01:08:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:59.142 01:08:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:59.142 01:08:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:59.142 01:08:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.142 01:08:25 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:59.142 01:08:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:59.142 01:08:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:59.142 01:08:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.142 01:08:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:59.142 01:08:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:59.142 01:08:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:59.142 01:08:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.142 01:08:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:59.142 01:08:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:59.142 01:08:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:59.142 01:08:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.142 01:08:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:59.142 01:08:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:59.142 01:08:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:59.142 01:08:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.142 01:08:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:59.142 01:08:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:59.142 01:08:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:59.142 01:08:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.142 01:08:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:59.142 01:08:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:59.142 01:08:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:59.142 01:08:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.142 01:08:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:59.142 01:08:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:59.142 01:08:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:59.142 01:08:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.401 01:08:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:02:59.401 01:08:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:59.401 01:08:25 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:02:59.401 01:08:25 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:59.401 01:08:25 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:59.401 01:08:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.401 01:08:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:59.401 01:08:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:59.401 01:08:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:59.401 01:08:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.401 01:08:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:59.401 01:08:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:59.401 01:08:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:59.401 01:08:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.401 01:08:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:59.401 01:08:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:02:59.401 01:08:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:59.401 01:08:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.401 01:08:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:59.401 01:08:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:59.401 01:08:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:59.401 01:08:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.401 01:08:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:59.401 01:08:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:59.401 01:08:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:59.401 01:08:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.401 01:08:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:59.401 01:08:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:59.401 01:08:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:59.401 01:08:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.401 01:08:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:59.401 01:08:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:59.401 01:08:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:59.401 01:08:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.401 01:08:25 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:59.401 01:08:25 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:59.401 01:08:25 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:59.401 01:08:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.401 01:08:25 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:59.401 01:08:25 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:59.401 01:08:25 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:59.401 01:08:25 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:59.401 01:08:25 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:59.401 ************************************ 00:02:59.402 START TEST denied 00:02:59.402 ************************************ 00:02:59.402 01:08:25 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:02:59.402 01:08:25 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:02:59.402 01:08:25 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:59.402 01:08:25 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:02:59.402 01:08:25 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:59.402 01:08:25 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:02.687 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:03:02.687 01:08:28 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:03:02.687 01:08:28 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:02.687 01:08:28 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:02.687 01:08:28 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:03:02.687 01:08:28 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:03:02.687 01:08:28 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:02.687 01:08:28 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:02.687 01:08:28 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:02.687 01:08:28 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:02.687 01:08:28 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:06.877 00:03:06.877 real 0m6.965s 00:03:06.877 user 0m2.339s 00:03:06.877 sys 0m3.946s 00:03:06.877 01:08:32 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:06.877 01:08:32 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:06.877 ************************************ 00:03:06.877 END TEST denied 00:03:06.877 ************************************ 00:03:06.877 01:08:32 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:06.877 01:08:32 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:06.877 01:08:32 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:06.877 01:08:32 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:06.877 01:08:32 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:06.877 ************************************ 00:03:06.877 START TEST allowed 00:03:06.877 ************************************ 00:03:06.877 01:08:32 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:06.877 01:08:32 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:03:06.877 01:08:32 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:06.877 01:08:32 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:03:06.877 01:08:32 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:06.877 01:08:32 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:11.065 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:11.065 01:08:36 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:11.065 01:08:36 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:11.065 01:08:36 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:11.065 01:08:36 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:11.065 01:08:36 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:14.426 00:03:14.426 real 0m7.393s 00:03:14.426 user 0m2.084s 00:03:14.426 sys 0m3.820s 00:03:14.426 01:08:39 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:14.426 01:08:39 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:14.426 ************************************ 00:03:14.426 END TEST allowed 00:03:14.426 ************************************ 00:03:14.426 01:08:39 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:14.426 00:03:14.426 real 0m20.197s 00:03:14.426 user 0m6.612s 00:03:14.426 sys 0m11.572s 00:03:14.426 01:08:39 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:14.426 01:08:39 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:14.426 ************************************ 00:03:14.426 END TEST acl 00:03:14.426 ************************************ 00:03:14.426 01:08:39 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:14.426 01:08:39 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:14.426 01:08:39 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:14.426 01:08:39 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:14.426 01:08:39 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:14.426 ************************************ 00:03:14.426 START TEST hugepages 00:03:14.426 ************************************ 00:03:14.426 01:08:39 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:14.426 * Looking for test storage... 00:03:14.426 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:14.426 01:08:39 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:14.426 01:08:39 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:14.426 01:08:39 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:14.426 01:08:39 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:14.426 01:08:39 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:14.426 01:08:39 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:14.426 01:08:39 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:14.426 01:08:39 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:14.426 01:08:39 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:14.426 01:08:39 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:14.426 01:08:39 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.426 01:08:39 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.426 01:08:39 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.426 01:08:39 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.426 01:08:39 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.426 01:08:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:14.426 01:08:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:14.426 01:08:39 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 173910164 kB' 'MemAvailable: 176779460 kB' 'Buffers: 4132 kB' 'Cached: 9625580 kB' 'SwapCached: 0 kB' 'Active: 6633012 kB' 'Inactive: 3509512 kB' 'Active(anon): 6242336 kB' 'Inactive(anon): 0 kB' 'Active(file): 390676 kB' 'Inactive(file): 3509512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516156 kB' 'Mapped: 192600 kB' 'Shmem: 5729524 kB' 'KReclaimable: 227116 kB' 'Slab: 796456 kB' 'SReclaimable: 227116 kB' 'SUnreclaim: 569340 kB' 'KernelStack: 20656 kB' 'PageTables: 8880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982040 kB' 'Committed_AS: 7758840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315596 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3169236 kB' 'DirectMap2M: 21676032 kB' 'DirectMap1G: 177209344 kB'
00:03:14.426 01:08:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:14.426 01:08:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:14.426 [xtrace condensed: the same IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]] / continue cycle repeats for every remaining /proc/meminfo field, MemFree through HugePages_Surp, with no match]
00:03:14.427 01:08:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:14.427 01:08:39 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:03:14.427 01:08:39 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
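The scan condensed above is the whole of get_meminfo's field lookup: split each meminfo line on ': ', skip until the field name matches, print the value. A minimal bash sketch of the technique (a reconstruction for illustration, not the verbatim setup/common.sh, which also handles per-node sources as traced further below):

get_meminfo() {
    # Walk a meminfo listing line by line and print the value of the
    # first field whose name matches $1 (e.g. Hugepagesize -> 2048).
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done </proc/meminfo
    return 1    # field not present
}

get_meminfo Hugepagesize    # prints 2048 on the node traced above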
00:03:14.427 01:08:39 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:03:14.428 01:08:39 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:03:14.428 01:08:39 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:03:14.428 01:08:39 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:03:14.428 01:08:39 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:03:14.428 01:08:39 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:03:14.428 01:08:39 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:03:14.428 01:08:39 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:03:14.428 01:08:39 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:03:14.428 01:08:39 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:14.428 01:08:39 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:03:14.428 01:08:39 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:14.428 01:08:39 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:14.428 01:08:39 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:14.428 01:08:39 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:14.428 01:08:39 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:03:14.428 01:08:39 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:03:14.428 01:08:39 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:14.428 01:08:39 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:14.428 01:08:39 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:14.428 [xtrace condensed: the @40/@41 "echo 0" pair repeats for the second hugepage size of node0 and for both sizes of node1]
00:03:14.428 01:08:39 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:14.428 01:08:39 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:14.428 01:08:39 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:03:14.428 01:08:39 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:14.428 01:08:39 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:14.428 01:08:39 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:14.428 ************************************
00:03:14.428 START TEST default_setup
00:03:14.428 ************************************
00:03:14.428 01:08:39 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup
00:03:14.428 01:08:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:14.428 01:08:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:03:14.428 01:08:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:14.428 01:08:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:03:14.428 01:08:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:14.428 01:08:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:03:14.428 01:08:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:14.428 01:08:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:14.428 01:08:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:14.428 01:08:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:14.428 01:08:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:03:14.428 01:08:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:14.428 01:08:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:14.428 01:08:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:14.428 01:08:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:14.428 01:08:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:14.428 01:08:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:14.428 01:08:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:14.428 01:08:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:03:14.428 01:08:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:03:14.428 01:08:39 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:03:14.428 01:08:39 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
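The get_test_nr_hugepages trace above is a single division plus a per-node spread; restated for clarity (illustrative only, reading both quantities in kB as the trace implies, so the request is a 2 GiB pool of 2048 kB pages):

# get_test_nr_hugepages 2097152 0, restated
size=2097152                                # requested pool in kB (2 GiB)
default_hugepages=2048                      # Hugepagesize in kB
nr_hugepages=$((size / default_hugepages))  # 2097152 / 2048 = 1024 pages

# get_test_nr_hugepages_per_node 0: the lone user node "0" receives the
# whole pool, matching nodes_test[_no_nodes]=1024 in the trace.
user_nodes=(0)
declare -a nodes_test=()
for node in "${user_nodes[@]}"; do
    nodes_test[node]=$nr_hugepages
done
echo "node0: ${nodes_test[0]} x ${default_hugepages} kB hugepages"   # -> 1024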
00:03:16.329 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:16.329 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:16.329 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:16.329 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:16.329 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:16.329 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:16.329 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:16.329 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:16.329 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:16.329 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:16.329 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:16.329 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:16.329 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:16.329 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:16.329 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:16.329 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:17.703 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
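Each line above is setup.sh reporting a PCI function it rebound from its kernel driver (ioatdma for the IOAT DMA channels, nvme for the 0a54 SSD) to vfio-pci. The rebind itself is standard sysfs plumbing; a hand-rolled equivalent for one device, for orientation only (SPDK's setup.sh additionally filters devices, handles NUMA, and sizes hugepages; the dev variable here is just an example):

# Rebind one PCI function (e.g. the traced NVMe SSD) to vfio-pci.
dev=0000:5e:00.0
modprobe vfio-pci
# Prefer vfio-pci for this device regardless of vendor/device ID:
echo vfio-pci > "/sys/bus/pci/devices/$dev/driver_override"
# Detach the current driver (nvme in the trace), if any:
if [[ -e /sys/bus/pci/devices/$dev/driver ]]; then
    echo "$dev" > "/sys/bus/pci/devices/$dev/driver/unbind"
fi
# Ask the PCI core to re-probe; the override now lands on vfio-pci:
echo "$dev" > /sys/bus/pci/drivers_probe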
00:03:17.703 01:08:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:17.703 01:08:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:03:17.703 01:08:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:03:17.703 01:08:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:03:17.703 01:08:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:03:17.703 01:08:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:03:17.703 01:08:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:03:17.966 01:08:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:17.966 01:08:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:17.966 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:17.966 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:17.966 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:17.966 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:17.966 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:17.966 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:17.966 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:17.966 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:17.966 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:17.966 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:17.966 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:17.966 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 176061900 kB' 'MemAvailable: 178931116 kB' 'Buffers: 4132 kB' 'Cached: 9625684 kB' 'SwapCached: 0 kB' 'Active: 6654636 kB' 'Inactive: 3509512 kB' 'Active(anon): 6263960 kB' 'Inactive(anon): 0 kB' 'Active(file): 390676 kB' 'Inactive(file): 3509512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537276 kB' 'Mapped: 192788 kB' 'Shmem: 5729628 kB' 'KReclaimable: 226956 kB' 'Slab: 794996 kB' 'SReclaimable: 226956 kB' 'SUnreclaim: 568040 kB' 'KernelStack: 20768 kB' 'PageTables: 9092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 7782316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315692 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3169236 kB' 'DirectMap2M: 21676032 kB' 'DirectMap1G: 177209344 kB'
00:03:17.966 [xtrace condensed: get_meminfo walks the fields again, MemTotal through HardwareCorrupted, comparing each against \A\n\o\n\H\u\g\e\P\a\g\e\s and continuing]
00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
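One detail worth noting in the get_meminfo call above: local node= is empty, so the -e /sys/devices/system/node/node/meminfo test fails and the function falls back to /proc/meminfo. With a node argument it would read the per-node file instead, whose lines carry a "Node N " prefix that the mem=("${mem[@]#Node +([0-9]) }") expansion strips. A sketch of just that selection step (illustrative; pick_meminfo_source is a name of my choosing, not the traced script's):

#!/usr/bin/env bash
shopt -s extglob    # +([0-9]) below is an extglob pattern

pick_meminfo_source() {
    local node=${1-}              # empty -> system-wide totals
    local mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem <"$mem_f"
    # Per-node files prefix each line with "Node N "; strip it so both
    # sources parse with the same IFS=': ' read loop.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"
}

pick_meminfo_source      # whole machine
pick_meminfo_source 0    # NUMA node 0 only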
setup/common.sh@17 -- # local get=HugePages_Surp 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 176061600 kB' 'MemAvailable: 178930816 kB' 'Buffers: 4132 kB' 'Cached: 9625688 kB' 'SwapCached: 0 kB' 'Active: 6654156 kB' 'Inactive: 3509512 kB' 'Active(anon): 6263480 kB' 'Inactive(anon): 0 kB' 'Active(file): 390676 kB' 'Inactive(file): 3509512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536700 kB' 'Mapped: 192784 kB' 'Shmem: 5729632 kB' 'KReclaimable: 226956 kB' 'Slab: 794972 kB' 'SReclaimable: 226956 kB' 'SUnreclaim: 568016 kB' 'KernelStack: 20864 kB' 'PageTables: 9088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 7782336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315692 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3169236 kB' 'DirectMap2M: 21676032 kB' 'DirectMap1G: 177209344 kB' 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.967 01:08:43 
00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:17.967 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[xtrace condensed: the same IFS=': ' / read / compare / continue cycle at common.sh@31-32 repeats for every remaining /proc/meminfo field (NFS_Unstable through HugePages_Rsvd) until the requested key matches]
00:03:17.968 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:17.968 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:17.968 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:17.968 01:08:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:03:17.968 01:08:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:17.968 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:17.968 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:17.968 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:17.968 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:17.968 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:17.968 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:17.968 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:17.968 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
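What the trace above is exercising: get_meminfo mapfiles the chosen meminfo file and walks it with IFS=': ' read until the requested key matches, then echoes the value and returns. A minimal runnable sketch of that pattern, reconstructed from the trace alone (the name get_meminfo_sketch and the exact option handling are assumptions, not SPDK's literal setup/common.sh):

#!/usr/bin/env bash
# get_meminfo_sketch KEY [NODE] -> print KEY's value from (per-node) meminfo.
# Hypothetical re-creation of the lookup the xtrace shows; illustrative only.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f=/proc/meminfo
    # With a node id, prefer the node-local sysfs file (common.sh@23-24).
    if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    shopt -s extglob
    # Per-node files prefix every line with "Node <id> "; strip it, as the
    # trace does with mem=("${mem[@]#Node +([0-9]) }").
    mem=("${mem[@]#Node +([0-9]) }")
    local line
    for line in "${mem[@]}"; do
        # "MemTotal: 191381180 kB" -> var=MemTotal val=191381180 _=kB
        IFS=': ' read -r var val _ <<< "$line"
        # The escaped pattern in the trace ([[ $var == \H\u\g\e... ]]) is a
        # literal match; comparing against "$get" is equivalent.
        [[ $var == "$get" ]] || continue
        echo "$val"    # common.sh@33: emit the value, then return 0
        return 0
    done
    return 1
}

# e.g. get_meminfo_sketch HugePages_Surp    -> 0   (system-wide)
#      get_meminfo_sketch HugePages_Surp 0  -> 0   (node0, as later in this log)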
00:03:17.968 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:17.968 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:17.968 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:17.968 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 176061780 kB' 'MemAvailable: 178930996 kB' 'Buffers: 4132 kB' 'Cached: 9625704 kB' 'SwapCached: 0 kB' 'Active: 6653816 kB' 'Inactive: 3509512 kB' 'Active(anon): 6263140 kB' 'Inactive(anon): 0 kB' 'Active(file): 390676 kB' 'Inactive(file): 3509512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536804 kB' 'Mapped: 192704 kB' 'Shmem: 5729648 kB' 'KReclaimable: 226956 kB' 'Slab: 794924 kB' 'SReclaimable: 226956 kB' 'SUnreclaim: 567968 kB' 'KernelStack: 20848 kB' 'PageTables: 9272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 7782356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315740 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3169236 kB' 'DirectMap2M: 21676032 kB' 'DirectMap1G: 177209344 kB'
[xtrace condensed: common.sh@31-32 compares each field from MemTotal through HugePages_Free against HugePages_Rsvd and continues past it]
00:03:17.968 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:17.968 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:17.968 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:17.969 01:08:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:03:17.969 01:08:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:17.969 nr_hugepages=1024
00:03:17.969 01:08:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:17.969 resv_hugepages=0
00:03:17.969 01:08:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:17.969 surplus_hugepages=0
00:03:17.969 01:08:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:17.969 anon_hugepages=0
00:03:17.969 01:08:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:17.969 01:08:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:17.969 01:08:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:17.969 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:17.969 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:17.969 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:17.969 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:17.969 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:17.969 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:17.969 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:17.969 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:17.969 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:17.969 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:17.969 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 176061412 kB' 'MemAvailable: 178930628 kB' 'Buffers: 4132 kB' 'Cached: 9625728 kB' 'SwapCached: 0 kB' 'Active: 6653024 kB' 'Inactive: 3509512 kB' 'Active(anon): 6262348 kB' 'Inactive(anon): 0 kB' 'Active(file): 390676 kB' 'Inactive(file): 3509512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535964 kB' 'Mapped: 192704 kB' 'Shmem: 5729672 kB' 'KReclaimable: 226956 kB' 'Slab: 794924 kB' 'SReclaimable: 226956 kB' 'SUnreclaim: 567968 kB' 'KernelStack: 20752 kB' 'PageTables: 9212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 7780888 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315644 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3169236 kB' 'DirectMap2M: 21676032 kB' 'DirectMap1G: 177209344 kB'
00:03:17.969 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
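The checks at setup/hugepages.sh@107-110 a few lines up assert the hugepage accounting: the value get_meminfo just returned (HugePages_Free at @107, HugePages_Total at @110) must equal nr_hugepages plus the surplus and reserved counts. A self-contained sketch of the same arithmetic against a live /proc/meminfo (illustrative only; the variable names mirror the trace, and the awk extraction is an assumption rather than SPDK's code):

#!/usr/bin/env bash
# Verify hugepage accounting the way the trace does at hugepages.sh@107-110.
nr_hugepages=1024                                         # configured target
surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)   # 0 in this run
resv=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)   # 0 in this run
free=$(awk '/^HugePages_Free:/  {print $2}' /proc/meminfo)   # 1024 in this run
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)  # 1024 in this run

(( free  == nr_hugepages + surp + resv )) || echo "free pool mismatch"  >&2
(( total == nr_hugepages + surp + resv )) || echo "total pool mismatch" >&2

With zero surplus and zero reserved pages, both checks reduce to free == total == nr_hugepages, which is exactly what the echoed summary (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0) records.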
[xtrace condensed: common.sh@31-32 walks the snapshot field by field, comparing each key from MemTotal through Unaccepted against HugePages_Total and continuing]
00:03:17.970 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:17.970 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:03:17.970 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:17.970 01:08:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:17.970 01:08:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:17.970 01:08:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:17.970 01:08:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:17.970 01:08:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:17.970 01:08:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:17.970 01:08:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:17.970 01:08:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:17.970 01:08:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:17.970 01:08:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:17.970 01:08:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:17.970 01:08:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:17.970 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:17.970 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:03:17.970 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:17.970 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:17.970 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:17.970 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:17.970 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:17.970 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:17.970 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:17.970 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:17.970 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:17.970 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 92574328 kB' 'MemUsed: 5088356 kB' 'SwapCached: 0 kB' 'Active: 1811296 kB' 'Inactive: 149892 kB' 'Active(anon): 1593268 kB' 'Inactive(anon): 0 kB' 'Active(file): 218028 kB' 'Inactive(file): 149892 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1577088 kB' 'Mapped: 106528 kB' 'AnonPages: 387308 kB' 'Shmem: 1209168 kB' 'KernelStack: 11560 kB' 'PageTables: 5788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102132 kB' 'Slab: 364676 kB' 'SReclaimable: 102132 kB' 'SUnreclaim: 262544 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
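For this per-node pass, get_nodes sizes the nodes_sys array from /sys/devices/system/node, and get_meminfo switches from /proc/meminfo to the node-local sysfs file (common.sh@23-24). A hedged sketch of that bookkeeping, reconstructed from the trace; the per-node nr_hugepages counter path is an assumption, since the trace only shows the already-expanded values 1024 and 0:

#!/usr/bin/env bash
# Enumerate NUMA nodes and their hugepage counts, as at hugepages.sh@27-33.
shopt -s extglob nullglob
declare -a nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    # ${node##*node} peels the path down to the node id; on this box the
    # result is nodes_sys[0]=1024, nodes_sys[1]=0.
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
no_nodes=${#nodes_sys[@]}        # 2 in this run
(( no_nodes > 0 )) || exit 1

# Per-node stats come from sysfs rather than /proc/meminfo; each line there
# carries a "Node <id> " prefix, which get_meminfo strips before parsing.
node=0
mem_f=/proc/meminfo
[[ -e /sys/devices/system/node/node$node/meminfo ]] &&
    mem_f=/sys/devices/system/node/node$node/meminfo
grep 'HugePages_Surp' "$mem_f"   # e.g. "Node 0 HugePages_Surp: 0"

The node0 snapshot printed above confirms both halves of the accounting: all 1024 hugepages live on node 0 (HugePages_Total/Free: 1024) and node 1 holds none, matching the nodes_sys assignments in the trace.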
[xtrace condensed: common.sh@31-32 compares each node0 field from MemTotal onward against HugePages_Surp and continues; the captured log breaks off mid-loop]
00:03:17.971 01:08:43
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.971 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.971 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.971 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.971 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.971 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:17.971 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:17.971 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:17.971 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.971 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:17.971 01:08:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:17.971 01:08:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:17.971 01:08:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:17.971 01:08:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:17.971 01:08:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:17.971 01:08:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:17.971 node0=1024 expecting 1024 00:03:17.971 01:08:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:17.971 00:03:17.971 real 0m3.882s 00:03:17.971 user 0m0.833s 00:03:17.971 sys 0m1.429s 00:03:17.971 01:08:43 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:17.971 01:08:43 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:17.971 ************************************ 00:03:17.971 END TEST default_setup 00:03:17.971 ************************************ 00:03:17.971 01:08:43 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:17.971 01:08:43 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:17.971 01:08:43 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:17.971 01:08:43 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:17.971 01:08:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:17.971 ************************************ 00:03:17.971 START TEST per_node_1G_alloc 00:03:17.971 ************************************ 00:03:17.971 01:08:43 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:17.971 01:08:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:17.971 01:08:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:17.971 01:08:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:17.971 01:08:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:17.971 01:08:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- 
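[note] The get_meminfo helper whose xtrace dominates this test is a plain key lookup over /proc/meminfo (or a per-node meminfo file). A minimal standalone sketch of the same pattern follows; the name lookup_meminfo is illustrative, not SPDK's actual helper in test/setup/common.sh, which additionally mapfile-loads per-node files and strips their "Node <n> " prefix, as the mem=("${mem[@]#Node +([0-9]) }") entries in the trace show.

  #!/usr/bin/env bash
  # Sketch of the lookup pattern condensed in the trace above.
  # lookup_meminfo is an illustrative name, not SPDK's get_meminfo.
  lookup_meminfo() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then   # e.g. HugePages_Surp
              echo "${val:-0}"            # value column; the kB unit lands in $_
              return 0
          fi
      done < /proc/meminfo
      return 1
  }
  lookup_meminfo HugePages_Surp           # prints 0 in the run traced here

The loop visits every field before the requested one, which is why a single lookup produces the long run of continue entries condensed above.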
00:03:17.971 01:08:43 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:17.971 01:08:43 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:17.971 01:08:43 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:17.971 01:08:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:17.971 ************************************
00:03:17.971 START TEST per_node_1G_alloc
00:03:17.971 ************************************
00:03:17.971 01:08:43 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:03:17.971 01:08:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:03:17.971 01:08:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:17.971 01:08:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:17.971 01:08:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:17.971 01:08:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:03:17.971 01:08:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:17.971 01:08:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:17.971 01:08:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:17.971 01:08:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:17.971 01:08:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:17.971 01:08:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:17.971 01:08:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:17.971 01:08:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:17.971 01:08:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:17.971 01:08:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:17.971 01:08:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:17.971 01:08:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:17.971 01:08:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:17.971 01:08:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:17.971 01:08:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:17.971 01:08:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:17.971 01:08:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:17.971 01:08:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:17.971 01:08:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:17.971 01:08:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:03:17.971 01:08:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:17.971 01:08:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:20.499 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:20.499 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:20.499 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:20.499 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:20.499 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:20.499 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:20.499 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:20.499 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:20.499 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:20.499 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:20.499 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:20.499 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:20.499 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:20.499 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:20.499 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:20.499 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:20.499 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
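[note] The sizing that get_test_nr_hugepages traces above reduces to simple arithmetic: a 1048576 kB (1 GiB) request divided by the 2048 kB default hugepage size gives 512 pages, and each of the two requested nodes (0 and 1) is assigned that full count. A sketch of the computation under those assumptions follows; variable names mirror the trace, but this is not the hugepages.sh source.

  #!/usr/bin/env bash
  # Sketch of the sizing walked through at hugepages.sh@49-@73 above.
  default_hugepages=2048                 # kB, Hugepagesize from /proc/meminfo
  size=1048576                           # kB requested (1 GiB)
  node_ids=(0 1)                         # the two NUMA nodes passed in

  (( size >= default_hugepages )) || exit 1
  (( nr_hugepages = size / default_hugepages ))   # 1048576 / 2048 = 512

  declare -A nodes_test
  for node in "${node_ids[@]}"; do
      nodes_test[$node]=$nr_hugepages    # each node targets 512 x 2 MiB pages
  done
  echo "NRHUGE=$nr_hugepages HUGENODE=$(IFS=,; echo "${node_ids[*]}")"
  # -> NRHUGE=512 HUGENODE=0,1

setup.sh is then invoked with NRHUGE=512 HUGENODE=0,1, as the @146 entries show; the comma in HUGENODE comes from the local IFS=, set at @143.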
00:03:20.766 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:20.766 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:20.766 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:20.766 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:20.766 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:20.766 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:20.766 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:20.766 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:20.766 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:20.766 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:20.766 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:20.766 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:20.766 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:20.766 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:20.766 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.766 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:20.766 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:20.766 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.766 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.766 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:20.766 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:20.766 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 176062756 kB' 'MemAvailable: 178931972 kB' 'Buffers: 4132 kB' 'Cached: 9625820 kB' 'SwapCached: 0 kB' 'Active: 6654696 kB' 'Inactive: 3509512 kB' 'Active(anon): 6264020 kB' 'Inactive(anon): 0 kB' 'Active(file): 390676 kB' 'Inactive(file): 3509512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536904 kB' 'Mapped: 192776 kB' 'Shmem: 5729764 kB' 'KReclaimable: 226956 kB' 'Slab: 795224 kB' 'SReclaimable: 226956 kB' 'SUnreclaim: 568268 kB' 'KernelStack: 20784 kB' 'PageTables: 9328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 7781408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315868 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3169236 kB' 'DirectMap2M: 21676032 kB' 'DirectMap1G: 177209344 kB'
00:03:20.767 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # [xtrace condensed: the same read loop continues past every field (MemTotal through HardwareCorrupted) until AnonHugePages matches]
00:03:20.767 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:20.767 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:20.767 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:20.767 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
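[note] verify_nr_hugepages first rules out transparent-hugepage interference: the @96 check passes because the THP mode string 'always [madvise] never' does not contain the selected '[never]', so THP could be active and AnonHugePages gets sampled first. It then reads HugePages_Surp and HugePages_Rsvd the same way. For a quick manual check, each lookup is equivalent to a one-line awk over /proc/meminfo; this is illustrative shorthand only, not the test's code.

  # Illustrative awk equivalents of the three get_meminfo calls in this block.
  anon=$(awk '/^AnonHugePages:/  {print $2}' /proc/meminfo)   # 0 kB in this run
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)   # 0 in this run
  resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)   # 0 in this run
  echo "anon=$anon surp=$surp resv=$resv"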
00:03:20.767 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:20.767 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:20.767 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:20.767 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:20.767 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:20.767 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.767 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:20.767 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:20.767 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.767 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.767 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:20.767 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:20.768 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 176060908 kB' 'MemAvailable: 178930124 kB' 'Buffers: 4132 kB' 'Cached: 9625824 kB' 'SwapCached: 0 kB' 'Active: 6654616 kB' 'Inactive: 3509512 kB' 'Active(anon): 6263940 kB' 'Inactive(anon): 0 kB' 'Active(file): 390676 kB' 'Inactive(file): 3509512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537900 kB' 'Mapped: 192700 kB' 'Shmem: 5729768 kB' 'KReclaimable: 226956 kB' 'Slab: 795156 kB' 'SReclaimable: 226956 kB' 'SUnreclaim: 568200 kB' 'KernelStack: 21008 kB' 'PageTables: 9616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 7782868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315836 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3169236 kB' 'DirectMap2M: 21676032 kB' 'DirectMap1G: 177209344 kB'
00:03:20.768 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # [xtrace condensed: the read loop again continues past every field (MemTotal through HugePages_Rsvd) until HugePages_Surp matches]
00:03:20.769 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:20.769 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:20.769 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:20.769 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
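[note] With surplus confirmed at 0, the reserved-page lookup comes next. get_meminfo reads the meminfo files, but the same hugepage counters are also exported per NUMA node under sysfs, which can be handy when checking a run like this by hand. A sketch under standard kernel paths for the 2048 kB page size in use; the loop itself is illustrative, not part of the test.

  #!/usr/bin/env bash
  # Sketch: per-node views of the global counters read above.
  # hugepages-2048kB matches the Hugepagesize reported in this run.
  for node in /sys/devices/system/node/node[0-9]*; do
      d=$node/hugepages/hugepages-2048kB
      [[ -d $d ]] || continue
      echo "${node##*/}: total=$(<"$d/nr_hugepages")" \
           "free=$(<"$d/free_hugepages")" \
           "surplus=$(<"$d/surplus_hugepages")"
  done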
00:03:20.769 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:20.769 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:20.769 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:20.769 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:20.769 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:20.769 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:20.769 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:20.769 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.769 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:20.769 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:20.769 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.769 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.769 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:20.769 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:20.769 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 176064080 kB' 'MemAvailable: 178933296 kB' 'Buffers: 4132 kB' 'Cached: 9625840 kB' 'SwapCached: 0 kB' 'Active: 6654824 kB' 'Inactive: 3509512 kB' 'Active(anon): 6264148 kB' 'Inactive(anon): 0 kB' 'Active(file): 390676 kB' 'Inactive(file): 3509512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537592 kB' 'Mapped: 192700 kB' 'Shmem: 5729784 kB' 'KReclaimable: 226956 kB' 'Slab: 795124 kB' 'SReclaimable: 226956 kB' 'SUnreclaim: 568168 kB' 'KernelStack: 21072 kB' 'PageTables: 9664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 7782892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315852 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3169236 kB' 'DirectMap2M: 21676032 kB' 'DirectMap1G: 177209344 kB'
[... the identical @31/@32 check-and-continue trace repeats for every /proc/meminfo key, MemTotal onward, until the requested key matches ...]
00:03:20.771 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:20.771 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:20.771 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:20.771 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
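The scans above are setup/common.sh's get_meminfo() running under xtrace: it loads /proc/meminfo (or a node-local meminfo file when a node number is passed), strips any "Node N " prefix, and read-loops over "key: value" pairs until the requested key matches, echoing the value. The \H\u\g\e\P\a\g\e\s... escapes are simply how bash xtrace renders the quoted right-hand side of a [[ $var == "$get" ]] test, marking it as a literal (non-glob) match. A paraphrased sketch of the same lookup, reconstructed from the traced commands rather than the verbatim SPDK source:

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern below

# get_meminfo KEY [NODE] - print KEY's value from /proc/meminfo, or from
# /sys/devices/system/node/nodeN/meminfo when a node number is given.
get_meminfo() {
	local get=$1 node=${2:-} var val _
	local mem_f mem
	mem_f=/proc/meminfo
	[[ -e /sys/devices/system/node/node$node/meminfo ]] &&
		mem_f=/sys/devices/system/node/node$node/meminfo
	mapfile -t mem <"$mem_f"
	# Per-node files prefix every line with "Node N "; strip it so the
	# same "key: value" parse works for both sources.
	mem=("${mem[@]#Node +([0-9]) }")
	while IFS=': ' read -r var val _; do
		# The trace's @32 check: continue until the requested key matches.
		[[ $var == "$get" ]] && echo "$val" && return 0
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

get_meminfo HugePages_Surp     # -> 0 in the run above
get_meminfo HugePages_Surp 0   # node0's surplus count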
00:03:20.771 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:20.771 nr_hugepages=1024
00:03:20.771 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:20.771 resv_hugepages=0
00:03:20.771 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:20.771 surplus_hugepages=0
00:03:20.771 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:20.771 anon_hugepages=0
00:03:20.771 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:20.772 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
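At @107/@109 the test asserts that the kernel's hugepage counters are consistent with what was requested: HugePages_Total must equal nr_hugepages plus surplus plus reserved pages, and in this run all 1024 pages are plain requested pages (surp=0, resv=0). A minimal standalone version of that assertion, built on the get_meminfo sketch above (illustrative only, with hypothetical error handling; not the SPDK script itself):

nr_hugepages=1024                     # what the test asked the kernel for
surp=$(get_meminfo HugePages_Surp)    # 0 in the run above
resv=$(get_meminfo HugePages_Rsvd)    # 0 in the run above
total=$(get_meminfo HugePages_Total)  # 1024 in the run above

if (( total != nr_hugepages + surp + resv )); then
	echo "hugepage accounting mismatch: total=$total" >&2
	exit 1
fi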
00:03:20.772 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:20.772 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:20.772 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:20.772 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:20.772 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:20.772 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.772 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:20.772 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:20.772 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.772 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.772 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:20.772 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:20.772 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 176068828 kB' 'MemAvailable: 178938044 kB' 'Buffers: 4132 kB' 'Cached: 9625860 kB' 'SwapCached: 0 kB' 'Active: 6654092 kB' 'Inactive: 3509512 kB' 'Active(anon): 6263416 kB' 'Inactive(anon): 0 kB' 'Active(file): 390676 kB' 'Inactive(file): 3509512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536816 kB' 'Mapped: 192700 kB' 'Shmem: 5729804 kB' 'KReclaimable: 226956 kB' 'Slab: 795028 kB' 'SReclaimable: 226956 kB' 'SUnreclaim: 568072 kB' 'KernelStack: 20992 kB' 'PageTables: 9780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 7782912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315852 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3169236 kB' 'DirectMap2M: 21676032 kB' 'DirectMap1G: 177209344 kB'
[... the identical @31/@32 check-and-continue trace repeats for each key until HugePages_Total matches ...]
00:03:20.773 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:20.773 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:20.773 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:20.773 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:20.773 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:20.773 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:20.773 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:20.773 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:20.773 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:20.773 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:20.773 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:20.773 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:20.773 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:20.773 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:20.773 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:20.773 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:20.773 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:03:20.773 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:20.773 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:20.773 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.773 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:20.773 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:20.773 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.773 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.773 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 93628480 kB' 'MemUsed: 4034204 kB' 'SwapCached: 0 kB' 'Active: 1811188 kB' 'Inactive: 149892 kB' 'Active(anon): 1593160 kB' 'Inactive(anon): 0 kB' 'Active(file): 218028 kB' 'Inactive(file): 149892 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1577192 kB' 'Mapped: 106532 kB' 'AnonPages: 387064 kB' 'Shmem: 1209272 kB' 'KernelStack: 11656 kB' 'PageTables: 6032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102132 kB' 'Slab: 364956 kB' 'SReclaimable: 102132 kB' 'SUnreclaim: 262824 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:20.774 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:20.774 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:20.774 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:20.774 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... the identical @31/@32 check-and-continue trace repeats for each node0 meminfo key, MemFree through Unaccepted ...]
00:03:20.775 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:20.775 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:20.775 01:08:46
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.775 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.775 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.775 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.775 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.775 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:20.775 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.775 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.775 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.775 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:20.775 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:20.775 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:20.775 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:20.775 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:20.775 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:20.775 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:20.775 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:20.775 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:20.775 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.775 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.775 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:20.775 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:20.775 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.775 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.775 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.775 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.775 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718496 kB' 'MemFree: 82443056 kB' 'MemUsed: 11275440 kB' 'SwapCached: 0 kB' 'Active: 4842900 kB' 'Inactive: 3359620 kB' 'Active(anon): 4670252 kB' 'Inactive(anon): 0 kB' 'Active(file): 172648 kB' 'Inactive(file): 3359620 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8052804 kB' 'Mapped: 86168 kB' 'AnonPages: 149776 kB' 'Shmem: 4520536 kB' 'KernelStack: 9064 kB' 'PageTables: 2672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124824 kB' 'Slab: 430008 kB' 'SReclaimable: 124824 kB' 'SUnreclaim: 
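
The records above trace setup/common.sh's get_meminfo resolving HugePages_Surp for node 1: it picks the per-node meminfo file, strips the "Node N " prefix from each line, then walks the fields until the requested key matches. As a reading aid, a minimal bash sketch of that lookup pattern follows; the shape is inferred from this xtrace, so the helper name and details are illustrative rather than the exact upstream source:

shopt -s extglob   # the +([0-9]) prefix-strip pattern below needs extglob

# Sketch of a get_meminfo-style lookup, inferred from the trace above.
get_meminfo_sketch() {
  local get=$1 node=${2:-} line var val _
  local mem_f=/proc/meminfo
  # Per-node queries read that node's own meminfo file instead.
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  local -a mem
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix
  for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"
    [[ $var == "$get" ]] || continue   # the repeated test/continue seen above
    echo "$val"                        # e.g. "echo 0" for HugePages_Surp
    return 0
  done
  return 1
}

get_meminfo_sketch HugePages_Surp 1   # prints 0 on the node traced here
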
00:03:20.775 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718496 kB' 'MemFree: 82443056 kB' 'MemUsed: 11275440 kB' 'SwapCached: 0 kB' 'Active: 4842900 kB' 'Inactive: 3359620 kB' 'Active(anon): 4670252 kB' 'Inactive(anon): 0 kB' 'Active(file): 172648 kB' 'Inactive(file): 3359620 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8052804 kB' 'Mapped: 86168 kB' 'AnonPages: 149776 kB' 'Shmem: 4520536 kB' 'KernelStack: 9064 kB' 'PageTables: 2672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124824 kB' 'Slab: 430008 kB' 'SReclaimable: 124824 kB' 'SUnreclaim: 305184 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: the node1 scan repeats the same IFS=': ' / read -r var val _ / [[ $var == HugePages_Surp ]] / continue cycle (timestamps 00:03:20.775-00:03:20.776) for every field from MemTotal through HugePages_Free]
00:03:20.776 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:20.776 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:20.776 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:20.776 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:20.776 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:20.776 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:20.776 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:20.776 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:20.776 node0=512 expecting 512
00:03:20.776 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:20.776 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:20.776 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:20.776 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:20.776 node1=512 expecting 512
00:03:20.776 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:20.776
00:03:20.776 real 0m2.792s
00:03:20.776 user 0m1.150s
00:03:20.776 sys 0m1.679s
00:03:20.776 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:20.776 01:08:46 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:20.776 ************************************
00:03:20.776 END TEST per_node_1G_alloc
00:03:20.776 ************************************
00:03:20.776 01:08:46
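
The even_2G_alloc test that starts below requests NRHUGE=1024 hugepages of the default 2048 kB size (2 GiB in total) with HUGE_EVEN_ALLOC=yes, i.e. 512 pages on each of this machine's two NUMA nodes. A rough illustration of what such an even split comes down to at the sysfs level (illustrative only; SPDK's scripts/setup.sh, invoked next, handles reservation, drivers, and many more cases):

# Illustrative even hugepage split, not SPDK's setup.sh itself.
NRHUGE=1024                                   # 1024 x 2048 kB = 2 GiB
nodes=(/sys/devices/system/node/node[0-9]*)   # NUMA nodes present
per_node=$(( NRHUGE / ${#nodes[@]} ))         # 512 per node on a 2-node box
for n in "${nodes[@]}"; do
  echo "$per_node" | sudo tee "$n/hugepages/hugepages-2048kB/nr_hugepages" >/dev/null
done
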
setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:20.776 01:08:46 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:20.776 01:08:46 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:20.776 01:08:46 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:20.776 01:08:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:21.036 ************************************ 00:03:21.036 START TEST even_2G_alloc 00:03:21.036 ************************************ 00:03:21.036 01:08:46 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:21.036 01:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:21.036 01:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:21.036 01:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:21.036 01:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:21.036 01:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:21.036 01:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:21.036 01:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:21.036 01:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:21.036 01:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:21.036 01:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:21.036 01:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:21.036 01:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:21.036 01:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:21.036 01:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:21.036 01:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:21.036 01:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:21.036 01:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:21.036 01:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:21.036 01:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:21.036 01:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:21.036 01:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:21.036 01:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:21.036 01:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:21.036 01:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:21.036 01:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:21.036 01:08:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:21.036 01:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:21.036 01:08:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:23.571 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:23.571 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:23.571 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:23.571 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:23.571 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:23.571 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:23.571 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:23.571 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:23.571 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:23.571 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:23.571 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:23.571 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:23.571 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:23.571 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:23.571 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:23.571 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:23.571 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:23.571 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:23.571 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:23.571 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:23.571 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:23.571 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:23.571 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:23.571 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:23.571 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:23.571 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:23.571 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:23.571 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:23.571 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:23.571 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.571 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.571 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.571 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.571 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.571 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.571 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.571 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.571 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 176052664 kB' 'MemAvailable: 178921864 kB' 'Buffers: 4132 
kB' 'Cached: 9625972 kB' 'SwapCached: 0 kB' 'Active: 6651780 kB' 'Inactive: 3509512 kB' 'Active(anon): 6261104 kB' 'Inactive(anon): 0 kB' 'Active(file): 390676 kB' 'Inactive(file): 3509512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533784 kB' 'Mapped: 191468 kB' 'Shmem: 5729916 kB' 'KReclaimable: 226924 kB' 'Slab: 794856 kB' 'SReclaimable: 226924 kB' 'SUnreclaim: 567932 kB' 'KernelStack: 20800 kB' 'PageTables: 8920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 7766928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315740 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3169236 kB' 'DirectMap2M: 21676032 kB' 'DirectMap1G: 177209344 kB'
[xtrace condensed: the AnonHugePages lookup repeats the same IFS=': ' / read -r var val _ / [[ $var == AnonHugePages ]] / continue cycle (timestamps 00:03:23.571-00:03:23.573) for every field from MemTotal through HardwareCorrupted]
00:03:23.573 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:23.573 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:23.573 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:23.573 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:23.573 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:23.573 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:23.573 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:23.573 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:23.573 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:23.573 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.573 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:23.573 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:23.573 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.573 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.573 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.573 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.573 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 176053704 kB' 'MemAvailable: 178922904 kB' 'Buffers: 4132 kB' 'Cached: 9625976 kB' 'SwapCached: 0 kB' 'Active: 6650944 kB' 'Inactive: 3509512 kB' 'Active(anon): 6260268 kB' 'Inactive(anon): 0 kB' 'Active(file): 390676 kB' 'Inactive(file): 3509512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533560 kB' 'Mapped: 191444 kB' 'Shmem: 5729920 kB' 'KReclaimable: 226924 kB' 'Slab: 794900 kB' 'SReclaimable: 226924 kB' 'SUnreclaim: 567976 kB' 'KernelStack: 20816 kB' 'PageTables: 8988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 7766944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315676 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3169236 kB' 'DirectMap2M: 21676032 kB' 'DirectMap1G: 177209344 kB'
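
At this point verify_nr_hugepages has established anon=0 and re-reads HugePages_Surp from the freshly printed /proc/meminfo snapshot; the remaining records confirm the allocation came out exactly as requested. A self-contained sketch of that read-back-and-compare step (illustrative, not the test's own code; expected=512 matches this run's even split):

# Illustrative read-back of the state being verified in this trace.
expected=512
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
(( surp == 0 )) || echo "unexpected surplus pages: $surp"
for n in /sys/devices/system/node/node[0-9]*; do
  got=$(cat "$n/hugepages/hugepages-2048kB/nr_hugepages")
  echo "${n##*/}=$got expecting $expected"   # cf. 'node0=512 expecting 512'
done
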
[xtrace condensed: the HugePages_Surp lookup repeats the same IFS=': ' / read -r var val _ / [[ $var == HugePages_Surp ]] / continue cycle (timestamps 00:03:23.573-00:03:23.574) for every field from MemTotal through SUnreclaim]
00:03:23.574 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:23.574 01:08:49
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.574 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.574 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.574 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.574 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.574 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.574 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.574 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.574 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.574 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.574 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.574 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.574 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.574 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.574 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.574 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.574 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.574 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.574 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.574 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.574 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.574 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # continue 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:23.575 01:08:49 
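The trace above is the test's get_meminfo helper scanning /proc/meminfo field by field until the requested key matches, then echoing its value. A minimal sketch of that logic, reconstructed from the xtrace (the function name, file paths, and variable names follow the trace; the simplified loop body is an assumption, since the real setup/common.sh buffers the file with mapfile and strips "Node <n>" prefixes via an extglob substitution):

  # Sketch only: reconstructed from the xtrace, not the actual setup/common.sh.
  get_meminfo() {
      local get=$1 node=${2:-} var val rest
      local mem_f=/proc/meminfo
      # Per-node queries read that node's own meminfo from sysfs when present.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while IFS=': ' read -r var val rest; do
          # Per-node meminfo lines carry a "Node <n>" prefix; drop it.
          if [[ $var == Node ]]; then
              IFS=': ' read -r var val rest <<<"$rest"
          fi
          if [[ $var == "$get" ]]; then
              echo "$val"   # e.g. "0" for HugePages_Surp in the run above
              return 0
          fi
      done <"$mem_f"
      return 1
  }

Usage mirrors the trace: surp=$(get_meminfo HugePages_Surp) for the system-wide value, or get_meminfo HugePages_Surp 0 for NUMA node 0.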
00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.575 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.576 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 176053704 kB' 'MemAvailable: 178922904 kB' 'Buffers: 4132 kB' 'Cached: 9625992 kB' 'SwapCached: 0 kB' 'Active: 6650960 kB' 'Inactive: 3509512 kB' 'Active(anon): 6260284 kB' 'Inactive(anon): 0 kB' 'Active(file): 390676 kB' 'Inactive(file): 3509512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533564 kB' 'Mapped: 191444 kB' 'Shmem: 5729936 kB' 'KReclaimable: 226924 kB' 'Slab: 794900 kB' 'SReclaimable: 226924 kB' 'SUnreclaim: 567976 kB' 'KernelStack: 20816 kB' 'PageTables: 8988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 7766964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315676 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3169236 kB' 'DirectMap2M: 21676032 kB' 'DirectMap1G: 177209344 kB'
[xtrace elided: setup/common.sh@31-32 again walks the fields one by one with 'continue' until HugePages_Rsvd matches]
00:03:23.578 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:23.578 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:23.578 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:23.578 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:23.578 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:23.578 nr_hugepages=1024
00:03:23.578 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:23.578 resv_hugepages=0
00:03:23.578 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:23.578 surplus_hugepages=0
00:03:23.578 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:23.578 anon_hugepages=0
00:03:23.578 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:23.578 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
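The surp, resv, and nr_hugepages values feed the consistency check traced at setup/hugepages.sh@107-109: the 1024 pages the test requested must equal what the kernel reports, with no reserved or surplus pages outstanding. A minimal equivalent of that check, assuming the get_meminfo sketch shown earlier (the message strings are illustrative, not from the trace):

  # Assumes the get_meminfo sketch above; this run reported 1024/0/0.
  nr_hugepages=1024
  surp=$(get_meminfo HugePages_Surp)
  resv=$(get_meminfo HugePages_Rsvd)
  total=$(get_meminfo HugePages_Total)
  if (( total == nr_hugepages + surp + resv )); then
      echo "hugepage accounting consistent: $total pages"
  else
      echo "mismatch: total=$total nr=$nr_hugepages surp=$surp resv=$resv" >&2
  fi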
00:03:23.578 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:23.578 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:23.578 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:23.578 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:23.578 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:23.578 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.578 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:23.578 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:23.578 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.578 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.578 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.578 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.578 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 176054080 kB' 'MemAvailable: 178923280 kB' 'Buffers: 4132 kB' 'Cached: 9626032 kB' 'SwapCached: 0 kB' 'Active: 6650652 kB' 'Inactive: 3509512 kB' 'Active(anon): 6259976 kB' 'Inactive(anon): 0 kB' 'Active(file): 390676 kB' 'Inactive(file): 3509512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533192 kB' 'Mapped: 191444 kB' 'Shmem: 5729976 kB' 'KReclaimable: 226924 kB' 'Slab: 794900 kB' 'SReclaimable: 226924 kB' 'SUnreclaim: 567976 kB' 'KernelStack: 20800 kB' 'PageTables: 8936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 7766988 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315676 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3169236 kB' 'DirectMap2M: 21676032 kB' 'DirectMap1G: 177209344 kB'
[xtrace elided: field-by-field scan with 'continue' until HugePages_Total matches]
00:03:23.840 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:23.840 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:23.840 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:23.840 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:23.840 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:23.840 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:23.840 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:23.840 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:23.840 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:23.840 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:23.840 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:23.840 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:23.840 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:23.840 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:23.840 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:23.840 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:23.840 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:03:23.840 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:23.840 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:23.840 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.840 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:23.840 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:23.840 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.840 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.840 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.840 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.840 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 93615448 kB' 'MemUsed: 4047236 kB' 'SwapCached: 0 kB' 'Active: 1808548 kB' 'Inactive: 149892 kB' 'Active(anon): 1590520 kB' 'Inactive(anon): 0 kB'
[xtrace condensed: the @31-32 loop scans node0's meminfo keys (MemTotal through HugePages_Free), skipping each with "continue" until HugePages_Surp matched]
00:03:23.841 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:23.841 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:23.841 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
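[Editor's note: a minimal sketch of the get_meminfo helper the trace keeps re-entering, reconstructed from the xtrace itself (the function lives in setup/common.sh per the tags; exact upstream wording may differ). The long runs of "continue" condensed above are this loop skipping every meminfo key that is not the one requested.]

    # Reconstructed sketch, not the verbatim SPDK source.
    get_meminfo() {
        local get=$1 node=$2              # key to fetch, optional NUMA node number
        local mem_f=/proc/meminfo mem var val _ line
        # Per-node queries read the node-local file instead; its lines carry a
        # "Node <n> " prefix that is stripped before key matching.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        shopt -s extglob
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the "continue" spam in the log
            echo "$val"
            return 0
        done
        return 1
    }
    # e.g. get_meminfo HugePages_Surp 0  ->  prints "0", as echoed in the trace above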
00:03:23.841 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:23.841 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:23.841 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:23.841 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:23.841 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:03:23.841 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:23.841 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:23.841 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.841 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:23.841 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:23.841 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.841 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.841 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.841 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.841 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718496 kB' 'MemFree: 82444268 kB' 'MemUsed: 11274228 kB' 'SwapCached: 0 kB' 'Active: 4842316 kB' 'Inactive: 3359620 kB' 'Active(anon): 4669668 kB' 'Inactive(anon): 0 kB' 'Active(file): 172648 kB' 'Inactive(file): 3359620 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8052896 kB' 'Mapped: 85896 kB' 'AnonPages: 149172 kB' 'Shmem: 4520628 kB' 'KernelStack: 9144 kB' 'PageTables: 2980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124792 kB' 'Slab: 430008 kB' 'SReclaimable: 124792 kB' 'SUnreclaim: 305216 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: the @31-32 loop scans node1's meminfo keys (MemTotal through HugePages_Free), skipping each with "continue" until HugePages_Surp matched]
00:03:23.843 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:23.843 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:23.843 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:23.843 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:23.843 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:23.843 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:23.843 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:23.843 node0=512 expecting 512
00:03:23.843 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:23.843 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:23.843 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:23.843 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:23.843 node1=512 expecting 512
00:03:23.843 01:08:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:23.843
00:03:23.843 real 0m2.868s
00:03:23.843 user 0m1.182s
00:03:23.843 sys 0m1.741s
00:03:23.843 01:08:49 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:23.843 01:08:49 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:23.843 ************************************
00:03:23.843 END TEST even_2G_alloc
00:03:23.843 ************************************
00:03:23.843 01:08:49 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:23.843 01:08:49 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:23.843 01:08:49 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:23.843 01:08:49 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:23.843 01:08:49 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:23.843 ************************************
00:03:23.843 START TEST odd_alloc
00:03:23.843 ************************************
00:03:23.843 01:08:49 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:03:23.843 01:08:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:23.843 01:08:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:03:23.843 01:08:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:23.843 01:08:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:23.843 01:08:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:23.843 01:08:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:23.843 01:08:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:23.843 01:08:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:23.843 01:08:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:23.843 01:08:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:23.843 01:08:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:23.843 01:08:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:23.843 01:08:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:23.843 01:08:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:23.843 01:08:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:23.843 01:08:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:23.843 01:08:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:03:23.843 01:08:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:23.843 01:08:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:23.843 01:08:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:23.843 01:08:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:23.843 01:08:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:23.843 01:08:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:23.843 01:08:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:23.843 01:08:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:23.843 01:08:49 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:03:23.843 01:08:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:23.843 01:08:49 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:26.375 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:26.375 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:26.375 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:26.375 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:26.375 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:26.375 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:26.375 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:26.375 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:26.375 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:26.375 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:26.375 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:26.375 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:26.375 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:26.375 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:26.375 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:26.375 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:26.375 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:26.375 01:08:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:26.375 01:08:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:03:26.375 01:08:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:26.375 01:08:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:26.375 01:08:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:26.375 01:08:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:26.375 01:08:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:26.375 01:08:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
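[Editor's note: the "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" test just above checks the active transparent-hugepage mode; on Linux the bracketed word in /sys/kernel/mm/transparent_hugepage/enabled marks the mode in effect. A sketch of the idea, using the get_meminfo reconstruction from earlier; how the sampled value is used afterwards is an assumption:]

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        # THP may be in use, so sample anonymous hugepage usage before
        # judging the explicit hugepage counters.
        anon=$(get_meminfo AnonHugePages)
    else
        anon=0
    fi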
226924 kB' 'Slab: 794612 kB' 'SReclaimable: 226924 kB' 'SUnreclaim: 567688 kB' 'KernelStack: 20864 kB' 'PageTables: 8760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029592 kB' 'Committed_AS: 7770252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315836 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3169236 kB' 'DirectMap2M: 21676032 kB' 'DirectMap1G: 177209344 kB' 00:03:26.375 01:08:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.375 01:08:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.375 01:08:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.375 01:08:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.375 01:08:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.375 01:08:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.375 01:08:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.375 01:08:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.375 01:08:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.375 01:08:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.375 01:08:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.375 01:08:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.375 01:08:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.375 01:08:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.375 01:08:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.375 01:08:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.375 01:08:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.375 01:08:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.375 01:08:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.375 01:08:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.376 01:08:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.376 01:08:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.376 01:08:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.376 01:08:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.376 01:08:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.376 01:08:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.376 01:08:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.376 01:08:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.376 01:08:51 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [per-key scan continues: Inactive through HardwareCorrupted read and skipped, none matching AnonHugePages; repetitive read/compare/continue trace condensed]
00:03:26.377 01:08:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:26.377 01:08:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.377 01:08:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:26.377 01:08:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:26.377 01:08:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:26.377 01:08:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:26.377 01:08:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:26.377 01:08:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:26.377 01:08:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.377 01:08:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.377 01:08:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:26.377 01:08:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
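The trace above shows one complete get_meminfo lookup (AnonHugePages) finishing and the next one (HugePages_Surp) starting: the helper in setup/common.sh snapshots /proc/meminfo, then walks it key by key until the requested key matches, and echoes that key's value. Below is a minimal sketch of the helper as it can be reconstructed from the PS4 line references in this trace; the exact source may differ, and the extglob requirement and the NUMA-node handling are assumptions:

    #!/usr/bin/env bash
    shopt -s extglob   # assumed: needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1     # key to look up, e.g. AnonHugePages
        local node=$2    # optional NUMA node; empty throughout this run
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # With a node argument the node-local meminfo is used instead
        # (the trace shows both tests failing here because $node is empty).
        if [[ -e /sys/devices/system/node/node$node/meminfo ]] && [[ -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node meminfo lines carry a "Node N " prefix; strip it so both
        # file formats parse identically.
        mem=("${mem[@]#Node +([0-9]) }")
        # IFS=': ' splits "AnonHugePages: 0 kB" into var=AnonHugePages,
        # val=0, _=kB -- the read/compare/continue loop traced above.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo AnonHugePages   # prints 0 here, matching anon=0 above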
00:03:26.377 01:08:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.377 01:08:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.377 01:08:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.377 01:08:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.377 01:08:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 176064032 kB' 'MemAvailable: 178933232 kB' 'Buffers: 4132 kB' 'Cached: 9626132 kB' 'SwapCached: 0 kB' 'Active: 6652112 kB' 'Inactive: 3509512 kB' 'Active(anon): 6261436 kB' 'Inactive(anon): 0 kB' 'Active(file): 390676 kB' 'Inactive(file): 3509512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534616 kB' 'Mapped: 191552 kB' 'Shmem: 5730076 kB' 'KReclaimable: 226924 kB' 'Slab: 794672 kB' 'SReclaimable: 226924 kB' 'SUnreclaim: 567748 kB' 'KernelStack: 20896 kB' 'PageTables: 8984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029592 kB' 'Committed_AS: 7770268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315868 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3169236 kB' 'DirectMap2M: 21676032 kB' 'DirectMap1G: 177209344 kB'
00:03:26.377 01:08:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [per-key scan: MemTotal through HugePages_Free read and skipped, none matching HugePages_Surp; repetitive trace condensed]
00:03:26.378 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:26.378 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.378 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:26.378 01:08:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:26.378 01:08:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:26.378 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:26.378 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:26.378 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:26.378 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.378 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.378 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:26.378 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:26.378 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.378 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.378 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.379 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 176063752 kB' 'MemAvailable: 178932952 kB' 'Buffers: 4132 kB' 'Cached: 9626148 kB' 'SwapCached: 0 kB' 'Active: 6650812 kB' 'Inactive: 3509512 kB' 'Active(anon): 6260136 kB' 'Inactive(anon): 0 kB' 'Active(file): 390676 kB' 'Inactive(file): 3509512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533304 kB' 'Mapped: 191476 kB' 'Shmem: 5730092 kB' 'KReclaimable: 226924 kB' 'Slab: 794588 kB' 'SReclaimable: 226924 kB' 'SUnreclaim: 567664 kB' 'KernelStack: 20848 kB' 'PageTables: 8956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029592 kB' 'Committed_AS: 7770288 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315900 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3169236 kB' 'DirectMap2M: 21676032 kB' 'DirectMap1G: 177209344 kB'
00:03:26.379 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.379 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [per-key scan: MemTotal through HugePages_Free read and skipped, none matching HugePages_Rsvd; repetitive trace condensed]
00:03:26.380 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:26.380 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.380 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:26.380 01:08:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:26.380 01:08:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:26.380 nr_hugepages=1025
00:03:26.380 01:08:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:26.380 resv_hugepages=0
00:03:26.380 01:08:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:26.380 surplus_hugepages=0
00:03:26.380 01:08:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:26.380 anon_hugepages=0
00:03:26.380 01:08:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:26.380 01:08:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:03:26.380 01:08:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
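Taken together, the three lookups above establish anon=0 (AnonHugePages), surp=0 (HugePages_Surp) and resv=0 (HugePages_Rsvd), and the snapshots are internally consistent: HugePages_Total is 1025 and Hugetlb is 2099200 kB, i.e. 1025 pages x 2048 kB. The odd_alloc case configured an odd page count on purpose, and setup/hugepages.sh@107-110 now verifies the kernel accounted for every page. A condensed sketch of that verification, assuming the get_meminfo sketch given earlier (variable names mirror the trace; the final comparison is inferred from line @110):

    anon=$(get_meminfo AnonHugePages)    # 0 in this run
    surp=$(get_meminfo HugePages_Surp)   # 0
    resv=$(get_meminfo HugePages_Rsvd)   # 0
    nr_hugepages=1025                    # the odd count the test configured

    # hugepages.sh@107/@109: all 1025 pages must be plain persistent
    # hugepages, with no surplus or reserved pages inflating the count.
    (( 1025 == nr_hugepages + surp + resv ))   # 1025 == 1025 + 0 + 0
    (( 1025 == nr_hugepages ))

    # hugepages.sh@110 then re-reads the kernel's own total (the scan that
    # follows below); per the snapshot it will report 1025.
    total=$(get_meminfo HugePages_Total)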
00:03:26.380 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:26.380 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:26.380 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:26.380 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.380 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.380 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:26.380 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:26.380 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.380 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.380 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.381 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.381 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 176062476 kB' 'MemAvailable: 178931676 kB' 'Buffers: 4132 kB' 'Cached: 9626168 kB' 'SwapCached: 0 kB' 'Active: 6650888 kB' 'Inactive: 3509512 kB' 'Active(anon): 6260212 kB' 'Inactive(anon): 0 kB' 'Active(file): 390676 kB' 'Inactive(file): 3509512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533348 kB' 'Mapped: 191476 kB' 'Shmem: 5730112 kB' 'KReclaimable: 226924 kB' 'Slab: 794588 kB' 'SReclaimable: 226924 kB' 'SUnreclaim: 567664 kB' 'KernelStack: 20752 kB' 'PageTables: 8900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029592 kB' 'Committed_AS: 7770308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315852 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3169236 kB' 'DirectMap2M: 21676032 kB' 'DirectMap1G: 177209344 kB'
00:03:26.381 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [per-key scan toward HugePages_Total in progress: MemTotal through SecPageTables read and skipped so far; excerpt ends mid-scan]
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.381 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.381 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.381 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.381 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.381 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.381 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.381 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.381 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.381 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 93628960 kB' 'MemUsed: 4033724 kB' 'SwapCached: 0 kB' 'Active: 1808828 kB' 'Inactive: 149892 kB' 'Active(anon): 1590800 kB' 'Inactive(anon): 0 kB' 'Active(file): 218028 kB' 'Inactive(file): 149892 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1577348 kB' 'Mapped: 105556 kB' 'AnonPages: 384556 kB' 'Shmem: 1209428 kB' 'KernelStack: 11512 kB' 'PageTables: 5528 kB' 
00:03:26.382 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 93628960 kB' 'MemUsed: 4033724 kB' 'SwapCached: 0 kB' 'Active: 1808828 kB' 'Inactive: 149892 kB' 'Active(anon): 1590800 kB' 'Inactive(anon): 0 kB' 'Active(file): 218028 kB' 'Inactive(file): 149892 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1577348 kB' 'Mapped: 105556 kB' 'AnonPages: 384556 kB' 'Shmem: 1209428 kB' 'KernelStack: 11512 kB' 'PageTables: 5528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102132 kB' 'Slab: 364488 kB' 'SReclaimable: 102132 kB' 'SUnreclaim: 262356 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
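For orientation, the lookup being traced around this point boils down to the following minimal bash sketch (illustrative only, not the SPDK source; get_meminfo_sketch is a made-up name): pick /proc/meminfo or the per-node file, strip the "Node <N> " prefix that per-node files carry on every line, and print the value of the requested field. The xtrace above and below is exactly this loop unrolled, one @31/@32 pair per field.

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern, as in setup/common.sh@29

# Hypothetical helper sketching what get_meminfo does in this trace.
get_meminfo_sketch() {
    local get=$1 node=$2 line var val _
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    while read -r line; do
        line=${line#Node +([0-9]) }            # per-node lines start "Node 0 ..."
        IFS=': ' read -r var val _ <<< "$line" # split "Field: value kB"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    return 1
}

get_meminfo_sketch HugePages_Total    # system-wide: 1025 in this run
get_meminfo_sketch HugePages_Surp 0   # node0 surplus: 0 in this run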
[xtrace condensed: get_meminfo walks the node0 dump above with the same read-check-continue cycle, skipping MemTotal through HugePages_Total while looking for HugePages_Surp]
00:03:26.383 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:26.383 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:26.383 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.383 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.383 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:26.383 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.383 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:26.383 01:08:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:26.383 01:08:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:26.383 01:08:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:26.383 01:08:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:26.383 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:26.383 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:03:26.383 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:26.383 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.383 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.383 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:26.383 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:26.383 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.383 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.383 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.383 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.384 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718496 kB' 'MemFree: 82431556 kB' 'MemUsed: 11286940 kB' 'SwapCached: 0 kB' 'Active: 4842348 kB' 'Inactive: 3359620 kB' 'Active(anon): 4669700 kB' 'Inactive(anon): 0 kB' 'Active(file): 172648 kB' 'Inactive(file): 3359620 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8052976 kB' 'Mapped: 85920 kB' 'AnonPages: 149096 kB' 'Shmem: 4520708 kB' 'KernelStack: 9304 kB' 'PageTables: 3428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124792 kB' 'Slab: 430100 kB' 'SReclaimable: 124792 kB' 'SUnreclaim: 305308 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
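The per-node accounting traced at hugepages.sh@115-@117 above can be read as the sketch below (hedged, with illustrative values; it reuses the hypothetical get_meminfo_sketch from the earlier sketch): each node's expected count is what the test configured plus whatever reserved and surplus pages the kernel reports. Both surpluses are 0 in this run, so 512 + 513 still accounts for all 1025 allocated pages.

# Sketch of the @115-@117 loop; values mirror this run (surp and resv are 0).
declare -a nodes_test=(512 513)   # per-node counts under test (512/513 here)
resv=0
for node in "${!nodes_test[@]}"; do
    surp=$(get_meminfo_sketch HugePages_Surp "$node")
    (( nodes_test[node] += resv + surp ))
done
echo $(( nodes_test[0] + nodes_test[1] ))   # 1025, matching HugePages_Total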
[xtrace condensed: the same read-check-continue cycle walks the node1 dump above, skipping MemTotal through HugePages_Free until HugePages_Surp matches]
00:03:26.385 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:26.385 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
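The comparison traced just below (hugepages.sh@126-@130) relies on a small bash trick worth spelling out: each count is written into an associative array as a key, so both sides collapse to the same key set no matter which node holds which count. That is why "node0=512 expecting 513" and "node1=513 expecting 512" still pass the final [[ 512 513 == 512 513 ]] test. A hedged sketch with illustrative values (bash expands identical key sets in the same order in practice, which is what the script leans on):

declare -A sorted_t=() sorted_s=()
nodes_test=(513 512)   # expected counts per node
nodes_sys=(512 513)    # counts the system actually reports per node
for node in "${!nodes_test[@]}"; do
    sorted_t[${nodes_test[node]}]=1
    sorted_s[${nodes_sys[node]}]=1
done
# identical key sets expand identically, so per-node ordering drops out
[[ ${!sorted_s[*]} == "${!sorted_t[*]}" ]] && echo "per-node totals match"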
00:03:26.385 01:08:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:26.385 01:08:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:26.385 01:08:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:26.385 01:08:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:26.385 01:08:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:26.385 01:08:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
node0=512 expecting 513
00:03:26.385 01:08:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:26.385 01:08:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:26.385 01:08:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:26.385 01:08:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
node1=513 expecting 512
00:03:26.385 01:08:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:03:26.385
00:03:26.385 real 0m2.420s
00:03:26.385 user 0m0.846s
00:03:26.385 sys 0m1.442s
00:03:26.385 01:08:52 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:26.385 01:08:52 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:26.385 ************************************
00:03:26.385 END TEST odd_alloc
00:03:26.385 ************************************
00:03:26.385 01:08:52 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:26.385 01:08:52 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:26.385 01:08:52 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:26.385 01:08:52 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:26.385 01:08:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:26.385 ************************************
00:03:26.385 START TEST custom_alloc
00:03:26.385 ************************************
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
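The get_test_nr_hugepages step traced above (1048576 -> nr_hugepages=512, and later 2097152 -> 1024) is plain size-to-page arithmetic against the 2048 kB default hugepage size reported in /proc/meminfo. A minimal sketch, assuming the size argument is in kB as those values suggest (pages_for_size is an illustrative name, not the SPDK helper):

default_hugepages=2048   # kB, i.e. Hugepagesize from /proc/meminfo

pages_for_size() {
    local size_kb=$1
    (( size_kb >= default_hugepages )) || return 1   # mirrors the @55 guard
    echo $(( size_kb / default_hugepages ))
}

pages_for_size 1048576   # -> 512
pages_for_size 2097152   # -> 1024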
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
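The loop traced in the lines that follow (hugepages.sh@181-@187) assembles the HUGENODE string that scripts/setup.sh consumes, one "nodes_hp[<node>]=<pages>" term per node; custom_alloc's "local IFS=," from @167 above is what makes the ${terms[*]} join comma-separated. A hedged sketch of that assembly (build_hugenode is an illustrative name):

declare -a nodes_hp=([0]=512 [1]=1024)

build_hugenode() {
    local IFS=,          # comma join, as set at hugepages.sh@167
    local node terms=()
    for node in "${!nodes_hp[@]}"; do
        terms+=("nodes_hp[$node]=${nodes_hp[node]}")
    done
    HUGENODE=${terms[*]}
}

build_hugenode
echo "$HUGENODE"   # nodes_hp[0]=512,nodes_hp[1]=1024
# HUGENODE is then exported for scripts/setup.sh, as the trace below shows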
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:26.385 01:08:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:28.924 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:28.924 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:28.924 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:28.924 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:28.924 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:28.924 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:28.924 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:28.924 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:28.924 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:28.924 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:28.924 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:28.924 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:28.924 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:28.924 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:28.924 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:28.924 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:28.924 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:28.924 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:03:28.924 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:03:28.924 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:03:28.924 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:28.924 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:28.924 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:28.924 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:28.924 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:28.924 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:28.924 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:28.924 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:28.924 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:28.924 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:28.924 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:28.924 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.924 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:28.924 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:28.924 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.924 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.924 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:28.924 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506328 kB' 'Committed_AS: 7769288 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315580 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3169236 kB' 'DirectMap2M: 21676032 kB' 'DirectMap1G: 177209344 kB' 00:03:28.924 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.924 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.924 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.924 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.924 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.924 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.924 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.924 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.924 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.924 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.924 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.924 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.924 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.924 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.924 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.924 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.924 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.924 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.924 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.924 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.924 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.924 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.924 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.924 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
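
The custom_alloc arithmetic traced earlier (hugepages.sh@62 through @187) reduces to a few lines of shell: an even 256-per-node split for the default 512-page request, then per-node overrides of nodes_hp[0]=512 and nodes_hp[1]=1024, which are folded into the HUGENODE string handed to scripts/setup.sh. The following is a minimal standalone sketch of that logic, not SPDK's actual setup/hugepages.sh; the names mirror the trace, and the size-to-pages division is inferred from size=2097152 kB yielding nr_hugepages=1024 with the 2048 kB hugepage size shown in the meminfo snapshot. The AnonHugePages scan resumes below it.

    #!/usr/bin/env bash
    # Sketch of the per-node hugepage accounting seen in the trace above.
    # default_hugepages is assumed to be the 2048 kB hugepage size; the
    # division below is inferred from the trace, not copied from SPDK.
    default_hugepages=2048                     # kB per hugepage
    declare -a nodes_hp nodes_test
    _no_nodes=2

    # Default path: split 512 pages evenly across both NUMA nodes (256 each).
    _nr_hugepages=512
    for ((i = _no_nodes; i > 0; i--)); do
        nodes_test[i - 1]=$((_nr_hugepages / _no_nodes))
    done

    # custom_alloc path: node0 keeps 512 pages, node1 gets a 2 GiB request,
    # i.e. 2097152 kB / 2048 kB = 1024 pages.
    nodes_hp[0]=512
    size=2097152
    ((size >= default_hugepages)) && nr_hugepages=$((size / default_hugepages))
    nodes_hp[1]=$nr_hugepages

    # Fold the per-node counts into the HUGENODE string consumed by setup.sh
    # and accumulate the total the verifier should expect (512 + 1024 = 1536).
    hugenode=()
    _nr_hugepages=0
    for node in "${!nodes_hp[@]}"; do
        hugenode+=("nodes_hp[$node]=${nodes_hp[node]}")
        ((_nr_hugepages += nodes_hp[node]))
    done
    HUGENODE=$(IFS=,; echo "${hugenode[*]}")
    echo "HUGENODE=$HUGENODE -> expect $_nr_hugepages pages total"

With HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' in effect, setup.sh should leave 1536 pages in total, which matches both the nr_hugepages=1536 at hugepages.sh@188 and the HugePages_Total: 1536 in the snapshots here.
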
00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.925 
01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.925 01:08:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.925 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 
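
The AnonHugePages lookup that just returned anon=0, and the HugePages_Surp and HugePages_Rsvd lookups that follow, all run the same common.sh scan: snapshot the meminfo file with mapfile, strip any "Node N " prefix so per-node files parse like /proc/meminfo, then split each line on ': ' until the requested key matches and print its value. A self-contained sketch reconstructed from the common.sh@17-@33 trace follows; the if-ordering around the per-node file check is a guess, and this is not the original common.sh.

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo() {    # sketch reconstructed from the trace, not the original
        local get=$1 node=${2:-}
        local var val _ line
        local mem_f mem
        mem_f=/proc/meminfo
        # With a node argument, prefer that node's own meminfo file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip it so both
        # file flavors parse identically.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            # "HugePages_Total:    1536" -> var=HugePages_Total val=1536
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }

    get_meminfo AnonHugePages      # -> 0 on this host, as in the trace
    get_meminfo HugePages_Total    # -> 1536 once setup.sh has run

The echo 0 at common.sh@33 in the trace is exactly this match step printing the value of the requested key; every non-matching key shows up as one [[ ... ]] test followed by continue, which is why each lookup produces the long key-by-key runs seen here.
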
00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 175030828 kB' 'MemAvailable: 177900028 kB' 'Buffers: 4132 kB' 'Cached: 9626280 kB' 'SwapCached: 0 kB' 'Active: 6651644 kB' 'Inactive: 3509512 kB' 'Active(anon): 6260968 kB' 'Inactive(anon): 0 kB' 'Active(file): 390676 kB' 'Inactive(file): 3509512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534064 kB' 'Mapped: 191488 kB' 'Shmem: 5730224 kB' 'KReclaimable: 226924 kB' 'Slab: 793840 kB' 'SReclaimable: 226924 kB' 'SUnreclaim: 566916 kB' 'KernelStack: 20784 kB' 'PageTables: 8892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506328 kB' 'Committed_AS: 7770552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315708 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3169236 kB' 'DirectMap2M: 21676032 kB' 'DirectMap1G: 177209344 kB' 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.926 01:08:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.926 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.927 
01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.927 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 175030196 kB' 'MemAvailable: 177899396 kB' 'Buffers: 4132 kB' 'Cached: 9626296 kB' 'SwapCached: 0 kB' 'Active: 6651532 kB' 'Inactive: 3509512 kB' 
'Active(anon): 6260856 kB' 'Inactive(anon): 0 kB' 'Active(file): 390676 kB' 'Inactive(file): 3509512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533896 kB' 'Mapped: 191488 kB' 'Shmem: 5730240 kB' 'KReclaimable: 226924 kB' 'Slab: 793840 kB' 'SReclaimable: 226924 kB' 'SUnreclaim: 566916 kB' 'KernelStack: 20816 kB' 'PageTables: 8844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506328 kB' 'Committed_AS: 7770820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315708 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3169236 kB' 'DirectMap2M: 21676032 kB' 'DirectMap1G: 177209344 kB' 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.928 
01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- 
00:03:28.928 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace elided: remaining /proc/meminfo keys (SwapFree through HugePages_Free) each tested against HugePages_Rsvd; no match, continue]
00:03:28.929 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:28.929 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:28.929 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:28.929 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:28.929 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:03:28.929 nr_hugepages=1536
00:03:28.929 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:28.929 resv_hugepages=0
00:03:28.929 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:28.929 surplus_hugepages=0
00:03:28.929 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:28.929 anon_hugepages=0
00:03:28.929 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:28.929 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:03:28.929 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:28.929 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:28.929 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:28.929 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:28.929 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:28.929 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.929 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:28.929 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:28.929 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.929 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.929 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:28.929 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:28.930 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 175032424 kB' 'MemAvailable: 177901624 kB' 'Buffers: 4132 kB' 'Cached: 9626320 kB' 'SwapCached: 0 kB' 'Active: 6651932 kB' 'Inactive: 3509512 kB' 'Active(anon): 6261256 kB' 'Inactive(anon): 0 kB' 'Active(file): 390676 kB' 'Inactive(file): 3509512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533896 kB' 'Mapped: 191488 kB' 'Shmem: 5730264 kB' 'KReclaimable: 226924 kB' 'Slab: 793840 kB' 'SReclaimable: 226924 kB' 'SUnreclaim: 566916 kB' 'KernelStack: 20800 kB' 'PageTables: 9064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506328 kB' 'Committed_AS: 7770844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315708 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3169236 kB' 'DirectMap2M: 21676032 kB' 'DirectMap1G: 177209344 kB'
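An aside on what the trace above keeps looping over: get_meminfo, traced at setup/common.sh@17-33, slurps a meminfo-style file and walks it one "Key: value" pair at a time; the backslash-heavy right-hand sides such as \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l are just xtrace's rendering of a quoted, literal [[ == ]] pattern operand. Below is a minimal reconstruction of the helper from the statements visible in the trace; xtrace does not show redirections, so treat this as a sketch rather than SPDK's verbatim source:

shopt -s extglob   # needed at runtime for the +([0-9]) pattern below

get_meminfo() {
  local get=$1 node=$2
  local var val _
  local mem_f mem
  mem_f=/proc/meminfo
  # Per-node counters live in sysfs; fall back to the system-wide file
  # when no node id was passed (the "local node=" case traced above).
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  mapfile -t mem < "$mem_f"
  # Per-node files prefix every line with "Node N "; strip that prefix.
  mem=("${mem[@]#Node +([0-9]) }")
  # Split each "Key: value kB" line on ':' and blanks; print the value
  # for the requested key, e.g. get_meminfo HugePages_Total -> 1536 here.
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] && echo "$val" && return 0
  done < <(printf '%s\n' "${mem[@]}")
  return 1
}

Called in a command substitution, its echo is captured rather than logged, which is why the "echo 0" / "echo 1536" trace lines above are never followed by an output line.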
00:03:28.930 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace elided: /proc/meminfo keys (MemTotal through Unaccepted) each tested against HugePages_Total; no match, continue]
00:03:28.931 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:28.931 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:03:28.931 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:28.931 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:28.931 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:28.931 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:03:28.931 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:28.931 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:28.931 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:28.931 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:28.931 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:28.931 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:28.931 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:28.931 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:28.931 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:28.931 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:28.931 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:03:28.931 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:28.931 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:28.931 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.931 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:28.931 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:28.931 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.931 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.931 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:28.931 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:28.931 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 93638936 kB' 'MemUsed: 4023748 kB' 'SwapCached: 0 kB' 'Active: 1808220 kB' 'Inactive: 149892 kB' 'Active(anon): 1590192 kB' 'Inactive(anon): 0 kB' 'Active(file): 218028 kB' 'Inactive(file): 149892 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1577468 kB' 'Mapped: 105568 kB' 'AnonPages: 384060 kB' 'Shmem: 1209548 kB' 'KernelStack: 11480 kB' 'PageTables: 5212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102132 kB' 'Slab: 363984 kB' 'SReclaimable: 102132 kB' 'SUnreclaim: 261852 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
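get_nodes, whose trace brackets the node0 dump above (setup/hugepages.sh@27-33), discovers the NUMA layout by globbing sysfs and records each node's hugepage count; xtrace only shows the assignments post-expansion (=512, =1024), so the command substitution below is an assumption, though the 512/1024 split of the 1536-page pool is exactly what this run reports. A sketch under those assumptions, reusing the get_meminfo reconstruction from earlier:

shopt -s extglob   # the node+([0-9]) glob below is an extended pattern

declare -a nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
  # ${node##*node} drops everything through the last "node", leaving the
  # numeric id: /sys/devices/system/node/node1 -> 1
  nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
done
no_nodes=${#nodes_sys[@]}   # 2 on this rig
(( no_nodes > 0 ))          # the test requires at least one NUMA node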
00:03:28.931 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace elided: node0 meminfo keys (MemTotal through HugePages_Free) each tested against HugePages_Surp; no match, continue]
00:03:28.932 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:28.932 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:28.932 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:28.932 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:28.932 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:28.932 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:28.932 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:28.932 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:28.932 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:03:28.932 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:28.932 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:28.932 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.932 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:28.932 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:28.932 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.932 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.932 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:28.932 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:28.933 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718496 kB' 'MemFree: 81398480 kB' 'MemUsed: 12320016 kB' 'SwapCached: 0 kB' 'Active: 4842360 kB' 'Inactive: 3359620 kB' 'Active(anon): 4669712 kB' 'Inactive(anon): 0 kB' 'Active(file): 172648 kB' 'Inactive(file): 3359620 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8053000 kB' 'Mapped: 85920 kB' 'AnonPages: 149000 kB' 'Shmem: 4520732 kB' 'KernelStack: 9272 kB' 'PageTables: 3392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 124792 kB' 'Slab: 429696 kB' 'SReclaimable: 124792 kB' 'SUnreclaim: 304904 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
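Around each per-node dump, the hugepages.sh@115-117 loop folds reserved and surplus pages into the node's expected count before the final per-node comparison; both contributions are 0 in this run, which is why the trace shows the bare "(( nodes_test[node] += 0 ))". A sketch of that accounting step (the inline get_meminfo call is an assumption; xtrace prints it only after expansion):

for node in "${!nodes_test[@]}"; do
  (( nodes_test[node] += resv ))   # globally reserved pages, 0 in this run
  # Surplus pages the node allocated beyond its static pool, also 0 here:
  (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
done

The same numbers can be spot-checked by hand on the test node with nothing but grep (a hypothetical manual equivalent, not part of the test):

grep -E 'HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo
grep HugePages /sys/devices/system/node/node*/meminfo

which here should report 1536/1536/0/0 system-wide, 512 pages on node0 and 1024 on node1, with 0 surplus on both.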
00:03:28.933 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace elided: node1 meminfo keys (MemTotal through HugePages_Free) each tested against HugePages_Surp; no match, continue]
00:03:28.934 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:28.934 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:28.934 01:08:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:28.934 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # ((
00:03:28.934 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:28.934 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:28.934 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:28.934 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:28.934 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:28.934 node0=512 expecting 512
00:03:28.934 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:28.934 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:28.934 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:28.934 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:03:28.934 node1=1024 expecting 1024
00:03:28.934 01:08:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:28.934
00:03:28.934 real 0m2.693s
00:03:28.934 user 0m1.074s
00:03:28.934 sys 0m1.658s
00:03:28.934 01:08:54 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:28.934 01:08:54 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:28.934 ************************************
00:03:28.934 END TEST custom_alloc
00:03:28.934 ************************************
00:03:28.934 01:08:54 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:28.934 01:08:54 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:28.934 01:08:54 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:28.934 01:08:54 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:28.934 01:08:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:29.193 ************************************
00:03:29.193 START TEST no_shrink_alloc
00:03:29.193 ************************************
00:03:29.193 01:08:54 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
00:03:29.193 01:08:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:29.193 01:08:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:29.193 01:08:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:29.193 01:08:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:03:29.193 01:08:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:29.193 01:08:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:29.193 01:08:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:29.193 01:08:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:29.193 01:08:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:29.193 01:08:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:29.193 01:08:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
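For context on the "node0=512 expecting 512" / "node1=1024 expecting 1024" lines just above: the custom_alloc check compares the per-node 2048 kB hugepage counters the kernel exposes under sysfs with the split the test assigned to each NUMA node. A minimal sketch of that check, assuming the standard sysfs layout; the nodes_test values are the ones from this run, and the variable names are illustrative rather than the verbatim SPDK helper:

#!/usr/bin/env bash
# Sketch: verify the per-NUMA-node 2 MiB hugepage split from this run
# (512 pages on node 0, 1024 on node 1).
declare -A nodes_test=([0]=512 [1]=1024)
for node in "${!nodes_test[@]}"; do
    sysfs=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
    actual=$(<"$sysfs")   # pages the kernel actually placed on this node
    echo "node$node=$actual expecting ${nodes_test[node]}"
    (( actual == nodes_test[node] )) || exit 1
done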
00:03:29.193 01:08:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:29.193 01:08:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:29.193 01:08:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:29.193 01:08:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:29.193 01:08:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:29.193 01:08:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:29.193 01:08:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:29.193 01:08:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:29.193 01:08:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:03:29.193 01:08:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:29.193 01:08:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:31.092 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:31.092 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:31.092 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:31.092 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:31.092 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:31.092 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:31.092 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:31.092 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:31.092 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:31.092 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:31.092 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:31.092 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:31.092 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:31.092 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:31.092 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:31.092 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:31.092 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:31.355 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:31.355 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:31.355 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:31.355 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:31.355 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:31.355 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:31.355 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:31.355 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:31.355 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:31.355 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:31.355 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
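The get_meminfo AnonHugePages call entered just above (and the field-by-field [[ ... ]] scans summarized below) account for most of this trace: the helper reads a meminfo file one line at a time until the requested field matches. A sketch of that parsing pattern, reconstructed from the common.sh@17-@33 xtrace lines; treat it as an illustrative reading of the trace, not the verbatim SPDK helper:

#!/usr/bin/env bash
# Sketch of the get_meminfo pattern: load /proc/meminfo (or a per-node
# meminfo from sysfs), split each line on ': ' into name/value, and print
# the value of the requested field. Reconstructed from the xtrace below.
shopt -s extglob
get_meminfo() {
    local get=$1 node=$2 var val _
    local mem_f=/proc/meminfo
    # Per-node files prefix every line with "Node N "; it is stripped
    # below so one parser serves both sources.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    local line
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the repeated "continue" entries in the trace
        echo "$val"
        return 0
    done
    return 1
}
get_meminfo HugePages_Free   # prints 1024 for the snapshot captured in this log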
00:03:31.355 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:31.355 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:31.355 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:31.355 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:31.355 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:31.355 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:31.355 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:31.355 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:31.355 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:31.355 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 176095684 kB' 'MemAvailable: 178964884 kB' 'Buffers: 4132 kB' 'Cached: 9626420 kB' 'SwapCached: 0 kB' 'Active: 6653028 kB' 'Inactive: 3509512 kB' 'Active(anon): 6262352 kB' 'Inactive(anon): 0 kB' 'Active(file): 390676 kB' 'Inactive(file): 3509512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534816 kB' 'Mapped: 192488 kB' 'Shmem: 5730364 kB' 'KReclaimable: 226924 kB' 'Slab: 794432 kB' 'SReclaimable: 226924 kB' 'SUnreclaim: 567508 kB' 'KernelStack: 20752 kB' 'PageTables: 8840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 7803128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315708 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3169236 kB' 'DirectMap2M: 21676032 kB' 'DirectMap1G: 177209344 kB'
[xtrace elided: setup/common.sh@32 walks the snapshot fields from MemTotal through HardwareCorrupted against AnonHugePages, logging continue / IFS=': ' / read -r var val _ for each, until the AnonHugePages line matches]
00:03:31.356 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:31.356 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:31.357 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:31.357 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:31.357 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:31.357 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
[xtrace elided: the same common.sh@18-@31 preamble as above (local node= / var val / mem_f mem, mem_f=/proc/meminfo, mapfile -t mem, Node-prefix strip, IFS=': ' read -r var val _)]
00:03:31.357 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 176097024 kB' 'MemAvailable: 178966224 kB' 'Buffers: 4132 kB' 'Cached: 9626424 kB' 'SwapCached: 0 kB' 'Active: 6652248 kB' 'Inactive: 3509512 kB' 'Active(anon): 6261572 kB' 'Inactive(anon): 0 kB' 'Active(file): 390676 kB' 'Inactive(file): 3509512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534528 kB' 'Mapped: 192388 kB' 'Shmem: 5730368 kB' 'KReclaimable: 226924 kB' 'Slab: 794420 kB' 'SReclaimable: 226924 kB' 'SUnreclaim: 567496 kB' 'KernelStack: 20736 kB' 'PageTables: 8768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 7803148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315692 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3169236 kB' 'DirectMap2M: 21676032 kB' 'DirectMap1G: 177209344 kB'
[xtrace elided: setup/common.sh@32 walks the snapshot fields from MemTotal through HugePages_Rsvd against HugePages_Surp, logging continue / IFS=': ' / read -r var val _ for each, until the HugePages_Surp line matches]
00:03:31.358 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:31.358 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:31.358 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:31.358 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:31.358 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:31.358 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
[xtrace elided: the same common.sh@18-@31 preamble as above]
00:03:31.359 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 176097024 kB' 'MemAvailable: 178966224 kB' 'Buffers: 4132 kB' 'Cached: 9626440 kB' 'SwapCached: 0 kB' 'Active: 6652264 kB' 'Inactive: 3509512 kB' 'Active(anon): 6261588 kB' 'Inactive(anon): 0 kB' 'Active(file): 390676 kB' 'Inactive(file): 3509512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534528 kB' 'Mapped: 192388 kB' 'Shmem: 5730384 kB' 'KReclaimable: 226924 kB' 'Slab: 794420 kB' 'SReclaimable: 226924 kB' 'SUnreclaim: 567496 kB' 'KernelStack: 20736 kB' 'PageTables: 8768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 7803168 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315692 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3169236 kB' 'DirectMap2M: 21676032 kB' 'DirectMap1G: 177209344 kB'
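At this point verify_nr_hugepages has collected anon=0 (AnonHugePages) and surp=0 (HugePages_Surp), and the get_meminfo HugePages_Rsvd call above will return 0 from this snapshot. In sketch form, the bookkeeping these three scans feed, using the get_meminfo sketch from earlier; the values are the ones this log reports, and the final assertion is illustrative, not the verbatim script:

# Counters sampled by verify_nr_hugepages (hugepages.sh@97-@100 in the trace).
anon=$(get_meminfo AnonHugePages)    # 0 here: no transparent hugepages counted
surp=$(get_meminfo HugePages_Surp)   # 0 here: the kernel created no surplus pages
resv=$(get_meminfo HugePages_Rsvd)   # 0 here: no pages reserved but not yet faulted
(( anon == 0 && surp == 0 && resv == 0 )) || echo 'unexpected hugepage accounting' >&2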
'Committed_AS: 7803168 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315692 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3169236 kB' 'DirectMap2M: 21676032 kB' 'DirectMap1G: 177209344 kB' 00:03:31.359 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.359 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.359 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.359 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.359 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.359 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.359 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.359 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.359 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.359 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.359 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.359 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.359 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.359 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.359 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.359 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.359 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.359 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.359 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.359 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.359 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.359 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.359 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.359 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.359 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.359 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.359 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.359 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.359 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
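[editor's note] The long runs in this trace come from setup/common.sh's get_meminfo helper, which reads a meminfo file once and scans it field by field. A minimal standalone sketch of that pattern, reconstructed from the xtrace above rather than copied from SPDK's setup/common.sh (names and structure here are illustrative):

    shopt -s extglob  # required for the +([0-9]) pattern below

    # get_meminfo <Field> [node]: print the value of one meminfo field.
    # With a node index, read the per-node sysfs copy instead of /proc/meminfo.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ line
        local mem_f=/proc/meminfo
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue  # the continue runs traced above
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Surp     # -> 0 on the box above
    get_meminfo HugePages_Surp 0   # -> 0, read from node0/meminfo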
[xtrace condensed: setup/common.sh@31-32 scans MemTotal through HugePages_Free, continuing past each field, until HugePages_Rsvd matches]
00:03:31.360 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:31.360 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:31.360 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:31.360 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:31.360 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:31.360 nr_hugepages=1024
00:03:31.360 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:31.360 resv_hugepages=0
00:03:31.360 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:31.360 surplus_hugepages=0
00:03:31.360 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:31.360 anon_hugepages=0
00:03:31.360 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:31.360 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:31.360 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:31.360 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:31.360 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:31.360 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:31.360 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:31.360 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:31.360 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:31.360 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:31.360 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:31.360 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:31.360 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:31.360 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:31.361 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 176098052 kB' 'MemAvailable: 178967252 kB' 'Buffers: 4132 kB' 'Cached: 9626464 kB' 'SwapCached: 0 kB' 'Active: 6652252 kB' 'Inactive: 3509512 kB' 'Active(anon): 6261576 kB' 'Inactive(anon): 0 kB' 'Active(file): 390676 kB' 'Inactive(file): 3509512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534492 kB' 'Mapped: 192388 kB' 'Shmem: 5730408 kB' 'KReclaimable: 226924 kB' 'Slab: 794420 kB' 'SReclaimable: 226924 kB' 'SUnreclaim: 567496 kB' 'KernelStack: 20720 kB' 'PageTables: 8716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 7803192 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315692 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3169236 kB' 'DirectMap2M: 21676032 kB' 'DirectMap1G: 177209344 kB'
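[editor's note] The values scraped so far (surp=0, resv=0, and the 1024-page totals in the snapshots) feed the consistency checks traced at setup/hugepages.sh@107-110. A hedged sketch of that arithmetic, reusing the get_meminfo sketch above; variable names are taken from the trace, but the exact expansion behind the literal 1024 in the original (( )) expressions is not visible in this log:

    # The kernel-reported total must account for requested, surplus and
    # reserved pages: here 1024 == 1024 + 0 + 0.
    nr_hugepages=1024
    surp=$(get_meminfo HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run
    total=$(get_meminfo HugePages_Total)  # 1024 in this run
    (( total == nr_hugepages + surp + resv )) || exit 1
    (( total == nr_hugepages )) || exit 1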
[xtrace condensed: setup/common.sh@31-32 scans MemTotal through Unaccepted, continuing past each field, until HugePages_Total matches]
00:03:31.362 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:31.362 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:31.362 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:31.362 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:31.362 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:31.362 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:31.362 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:31.362 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:31.362 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:31.362 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:31.362 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:31.362 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:31.362 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:31.362 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:31.362 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:31.362 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:31.362 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:03:31.362 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:31.362 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:31.362 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:31.362 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:31.362 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:31.362 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:31.362 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:31.362 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:31.362 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:31.362 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 92595252 kB' 'MemUsed: 5067432 kB' 'SwapCached: 0 kB' 'Active: 1809908 kB' 'Inactive: 149892 kB' 'Active(anon): 1591880 kB' 'Inactive(anon): 0 kB' 'Active(file): 218028 kB' 'Inactive(file): 149892 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1577596 kB' 'Mapped: 106392 kB' 'AnonPages: 385356 kB' 'Shmem: 1209676 kB' 'KernelStack: 11576 kB' 'PageTables: 5648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102132 kB' 'Slab: 364824 kB' 'SReclaimable: 102132 kB' 'SUnreclaim: 262692 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
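[editor's note] The get_nodes trace above shows how the harness enumerates NUMA nodes: glob /sys/devices/system/node/node+([0-9]) and key an array by the numeric suffix (${node##*node}). A sketch of that enumeration; filling the per-node counts via get_meminfo is an assumption here, since the trace only shows the already-expanded assignments nodes_sys[0]=1024 and nodes_sys[1]=0:

    shopt -s extglob nullglob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} strips everything through the last "node" -> 0, 1, ...
        idx=${node##*node}
        nodes_sys[idx]=$(get_meminfo HugePages_Total "$idx")  # assumed fill
    done
    no_nodes=${#nodes_sys[@]}  # 2 on this box: 1024 pages on node0, 0 on node1
    (( no_nodes > 0 )) || exit 1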
[xtrace condensed: setup/common.sh@31-32 scans the node0 fields (MemTotal through Unaccepted) toward HugePages_Surp; the capture breaks off mid-scan]
00:03:31.363 01:08:57 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@31 -- # read -r var val _ 00:03:31.363 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.363 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.363 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.363 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.363 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.363 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:31.363 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.363 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.363 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.363 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:31.363 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:31.364 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:31.364 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:31.364 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:31.364 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:31.364 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:31.364 node0=1024 expecting 1024 00:03:31.364 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:31.364 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:31.364 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:31.364 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:31.364 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:31.364 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:33.895 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:33.895 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:33.896 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:33.896 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:33.896 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:33.896 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:33.896 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:33.896 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:33.896 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:33.896 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:33.896 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:33.896 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:33.896 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:33.896 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:33.896 
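The hugepages.sh@126-130 entries above are the per-node tally of verify_nr_hugepages: for each NUMA node it records the count the test expects (sorted_t) next to the count the system reports (sorted_s), prints them, and fails unless they match. A minimal bash sketch of that check, assuming a nodes_test array keyed by node id (the names mirror the trace; this is not the full setup/hugepages.sh source):

    #!/usr/bin/env bash
    # Sketch only: one expected-count entry per NUMA node, as in the trace.
    declare -A nodes_test=([0]=1024)

    for node in "${!nodes_test[@]}"; do
        # Actual HugePages_Total for this node, taken from its sysfs meminfo.
        actual=$(awk '/HugePages_Total/ {print $NF}' \
            "/sys/devices/system/node/node${node}/meminfo")
        echo "node${node}=${actual} expecting ${nodes_test[node]}"
        [[ $actual == "${nodes_test[node]}" ]] || exit 1
    done

This reproduces the "node0=1024 expecting 1024" output line and the [[ 1024 == \1\0\2\4 ]] comparison seen in the trace.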
00:03:31.364 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:31.364 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:31.364 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:03:31.364 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:31.364 01:08:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:33.895 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:33.895 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:33.896 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:33.896 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:33.896 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:33.896 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:33.896 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:33.896 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:33.896 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:33.896 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:33.896 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:33.896 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:33.896 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:33.896 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:33.896 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:33.896 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:33.896 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:33.896 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:03:33.896 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:33.896 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:33.896 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:33.896 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:33.896 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:33.896 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:33.896 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:33.896 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:33.896 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:33.896 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:33.896 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:33.896 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:33.896 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:33.896 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.896 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:33.896 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:33.896 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.896 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.896 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.896 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.896 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 176082324 kB' 'MemAvailable: 178951524 kB' 'Buffers: 4132 kB' 'Cached: 9626544 kB' 'SwapCached: 0 kB' 'Active: 6653064 kB' 'Inactive: 3509512 kB' 'Active(anon): 6262388 kB' 'Inactive(anon): 0 kB' 'Active(file): 390676 kB' 'Inactive(file): 3509512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535152 kB' 'Mapped: 192392 kB' 'Shmem: 5730488 kB' 'KReclaimable: 226924 kB' 'Slab: 793784 kB' 'SReclaimable: 226924 kB' 'SUnreclaim: 566860 kB' 'KernelStack: 20720 kB' 'PageTables: 9088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 7803716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315836 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB'
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3169236 kB' 'DirectMap2M: 21676032 kB' 'DirectMap1G: 177209344 kB'
00:03:33.896 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:33.896 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical IFS/read/compare/continue trace repeated for MemFree through Inactive ...]
00:03:33.896 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.896 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
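Each read/compare/continue triple above is one iteration of the get_meminfo helper in setup/common.sh: it walks the captured meminfo lines, splitting each on ': ', skipping fields until the requested one (here AnonHugePages) matches, then echoes its value. A simplified sketch reconstructed from the trace; the traced helper mapfiles the whole file and strips any "Node <N> " prefix so it can also serve per-node queries, which this streaming version omits:

    # Print the value of one /proc/meminfo field, or fail if it is absent.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the compare/continue pairs in the trace
            echo "$val"                        # value only; the "kB" unit lands in $_
            return 0
        done </proc/meminfo
        return 1
    }

    get_meminfo AnonHugePages   # prints 0 on this host, matching the "echo 0" above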
00:03:33.896 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:33.896 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical IFS/read/compare/continue trace repeated for Inactive(anon) through HardwareCorrupted ...]
00:03:33.897 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:33.897 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:33.897 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:33.897 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:33.897 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:33.897 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:33.897 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:33.897 01:08:59 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@19 -- # local var val 00:03:33.897 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:33.897 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.897 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.897 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.897 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.897 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.897 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.897 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.897 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 176086608 kB' 'MemAvailable: 178955808 kB' 'Buffers: 4132 kB' 'Cached: 9626544 kB' 'SwapCached: 0 kB' 'Active: 6653588 kB' 'Inactive: 3509512 kB' 'Active(anon): 6262912 kB' 'Inactive(anon): 0 kB' 'Active(file): 390676 kB' 'Inactive(file): 3509512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535692 kB' 'Mapped: 192392 kB' 'Shmem: 5730488 kB' 'KReclaimable: 226924 kB' 'Slab: 793792 kB' 'SReclaimable: 226924 kB' 'SUnreclaim: 566868 kB' 'KernelStack: 20688 kB' 'PageTables: 9012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 7803364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315756 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3169236 kB' 'DirectMap2M: 21676032 kB' 'DirectMap1G: 177209344 kB' 00:03:33.897 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.897 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.897 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.897 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.897 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.897 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.897 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.897 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.897 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.897 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.897 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.897 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.897 01:08:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:33.897 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical IFS/read/compare/continue trace repeated for Cached through HugePages_Rsvd ...]
00:03:33.899 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:33.899 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:33.899 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:33.899 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:33.899 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:33.899 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:33.899 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:33.899 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:33.899 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:33.899 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.899 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:33.899 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:33.899 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.899 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.899 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.899 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.899 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 176087072 kB' 'MemAvailable: 178956272 kB' 'Buffers: 4132 kB' 'Cached: 9626568 kB' 'SwapCached: 0 kB' 'Active: 6652876 kB' 'Inactive: 3509512 kB' 'Active(anon): 6262200 kB' 'Inactive(anon): 0 kB' 'Active(file): 390676 kB' 'Inactive(file): 3509512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535028 kB' 'Mapped: 192392 kB' 'Shmem: 5730512 kB' 'KReclaimable: 226924 kB' 'Slab: 793920 kB' 'SReclaimable: 226924 kB' 'SUnreclaim: 566996 kB' 'KernelStack: 20736 kB' 'PageTables: 9096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 7804536 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315740 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3169236 kB' 'DirectMap2M: 21676032 kB' 'DirectMap1G: 177209344 kB'
00:03:33.899 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:33.899 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:33.899 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.899 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
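At this point verify_nr_hugepages has anon=0 and surp=0 and is fetching HugePages_Rsvd through the same scan. How the three counters are combined lies outside this excerpt; a hedged sketch of the kind of accounting such a verifier can perform, reusing the get_meminfo sketch above (illustrative assumptions, not the literal setup/hugepages.sh logic):

    # Illustrative accounting over the counters the trace just collected.
    total=$(get_meminfo HugePages_Total)   # 1024 preallocated 2048 kB pages here
    free=$(get_meminfo HugePages_Free)
    resv=$(get_meminfo HugePages_Rsvd)     # reserved but not yet faulted in
    surp=$(get_meminfo HugePages_Surp)     # pages allocated beyond nr_hugepages

    # Nothing should be surplus, and every unreserved page should still be
    # free for the upcoming nvmf run; 1024 matches the snapshot above.
    (( surp == 0 && total == 1024 && free - resv >= 0 )) \
        || echo "unexpected hugepage state" >&2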
00:03:33.899 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:33.899 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical IFS/read/compare/continue trace repeated for MemAvailable through PageTables ...]
00:03:33.900 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.900 01:08:59
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.900 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.900 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.900 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.900 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.900 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.900 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.900 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.900 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.900 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.900 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.900 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.900 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.900 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.900 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.900 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.900 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.900 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.900 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.900 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.900 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.900 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.900 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.900 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.900 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.900 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.900 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.900 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.900 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.900 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.900 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.900 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.900 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.900 01:08:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.901 01:08:59 
00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:33.901 nr_hugepages=1024
00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:33.901 resv_hugepages=0
00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:33.901 surplus_hugepages=0
00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:33.901 anon_hugepages=0
00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
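Every get_meminfo lookup in this test follows the pattern visible in the trace: read the global /proc/meminfo, or the per-node sysfs copy when a node number is passed, strip the "Node N " prefix the sysfs copies carry, then split each line on IFS=': ' and echo the value of the first matching key. A minimal standalone sketch of that pattern, assuming bash with extglob; get_meminfo_sketch is an illustrative name, not the verbatim setup/common.sh source:

  #!/usr/bin/env bash
  # Sketch of the meminfo lookup pattern traced above (illustrative,
  # not the verbatim setup/common.sh implementation).
  shopt -s extglob
  get_meminfo_sketch() {
      local get=$1 node=$2 var val _ line
      local mem_f=/proc/meminfo mem
      # Per-node statistics live in sysfs; otherwise fall back to the global file.
      [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem <"$mem_f"
      # sysfs copies prefix every line with "Node N "; strip it so both
      # file formats parse identically (+([0-9]) needs extglob).
      mem=("${mem[@]#Node +([0-9]) }")
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<<"$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }
  get_meminfo_sketch HugePages_Total    # -> 1024 in this run
  get_meminfo_sketch HugePages_Surp 0   # -> 0 on node 0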
00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.901 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 176081792 kB' 'MemAvailable: 178950992 kB' 'Buffers: 4132 kB' 'Cached: 9626600 kB' 'SwapCached: 0 kB' 'Active: 6656508 kB' 'Inactive: 3509512 kB' 'Active(anon): 6265832 kB' 'Inactive(anon): 0 kB' 'Active(file): 390676 kB' 'Inactive(file): 3509512 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538560 kB' 'Mapped: 192896 kB' 'Shmem: 5730544 kB' 'KReclaimable: 226924 kB' 'Slab: 793780 kB' 'SReclaimable: 226924 kB' 'SUnreclaim: 566856 kB' 'KernelStack: 20688 kB' 'PageTables: 8572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 7807804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315724 kB' 'VmallocChunk: 0 kB' 'Percpu: 69120 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3169236 kB' 'DirectMap2M: 21676032 kB' 'DirectMap1G: 177209344 kB'
00:03:33.902 [xtrace condensed: the per-key scan repeats, this time against HugePages_Total; every earlier key hits 'continue']
00:03:33.903 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:33.903 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:33.903 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:33.903 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:33.903 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:33.903 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:33.903 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:33.903 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:33.903 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:33.903 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:33.903 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:33.903 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:33.903 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:33.903 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
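The get_nodes call just traced takes the sysfs side of the same census: one nr_hugepages file per NUMA node. A sketch of that enumeration under the same assumptions (2048 kB pages, as reported by Hugepagesize above; the array name nodes_sys mirrors the trace, the rest is illustrative):

  #!/usr/bin/env bash
  # Per-node hugepage census, as in the get_nodes trace above (illustrative).
  shopt -s extglob nullglob
  declare -A nodes_sys
  for node in /sys/devices/system/node/node+([0-9]); do
      # The path ends in nodeN; use N as the array key.
      nodes_sys[${node##*node}]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  echo "no_nodes=${#nodes_sys[@]}"     # 2 on this machine
  for n in "${!nodes_sys[@]}"; do
      echo "node$n=${nodes_sys[$n]}"   # node0=1024, node1=0 in this run
  done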
00:03:33.903 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:33.903 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:33.903 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:03:33.903 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:33.903 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:33.903 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:33.903 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:33.903 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:33.903 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:33.903 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:33.903 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:33.903 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:33.903 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 92587604 kB' 'MemUsed: 5075080 kB' 'SwapCached: 0 kB' 'Active: 1810412 kB' 'Inactive: 149892 kB' 'Active(anon): 1592384 kB' 'Inactive(anon): 0 kB' 'Active(file): 218028 kB' 'Inactive(file): 149892 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1577704 kB' 'Mapped: 107236 kB' 'AnonPages: 385760 kB' 'Shmem: 1209784 kB' 'KernelStack: 11592 kB' 'PageTables: 5700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102132 kB' 'Slab: 364448 kB' 'SReclaimable: 102132 kB' 'SUnreclaim: 262316 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:33.903 [xtrace condensed: the scan walks node0's shorter key list against HugePages_Surp; every key before it hits 'continue']
00:03:33.904 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:33.904 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:33.904 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:33.904 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:33.904 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:33.904 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:33.904 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:33.904 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:33.904 node0=1024 expecting 1024
00:03:33.904 01:08:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:33.904
00:03:33.904 real 0m4.869s
00:03:33.904 user 0m1.908s
00:03:33.904 sys 0m2.982s
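The rest of the test is bookkeeping: the global count must equal requested pages plus surplus plus reserved, and each node the test touched must end up with the count it expected, hence the "node0=1024 expecting 1024" line above. A condensed sketch of those checks, with values taken from this run and otherwise illustrative names (nodes_test mirrors the trace; the per-node read here uses nr_hugepages rather than the trace's HugePages_Surp lookup):

  #!/usr/bin/env bash
  # Consistency checks mirroring the hugepages.sh assertions above (sketch).
  nr_hugepages=1024 surp=0 resv=0
  total=$(awk '/^HugePages_Total/ {print $2}' /proc/meminfo)
  (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting is off'
  declare -A nodes_test=([0]=1024)   # what this test allocated per node
  for node in "${!nodes_test[@]}"; do
      actual=$(<"/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages")
      echo "node$node=$actual expecting ${nodes_test[$node]}"
      [[ $actual == "${nodes_test[$node]}" ]] || exit 1
  done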
01:08:59 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:33.904 01:08:59 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:33.904 ************************************ 00:03:33.904 END TEST no_shrink_alloc 00:03:33.904 ************************************ 00:03:33.904 01:08:59 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:33.904 01:08:59 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:33.904 01:08:59 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:33.904 01:08:59 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:33.904 01:08:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:33.904 01:08:59 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:33.904 01:08:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:33.904 01:08:59 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:33.904 01:08:59 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:33.904 01:08:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:33.904 01:08:59 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:33.904 01:08:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:33.904 01:08:59 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:33.904 01:08:59 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:33.904 01:08:59 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:33.904 00:03:33.904 real 0m20.022s 00:03:33.904 user 0m7.196s 00:03:33.904 sys 0m11.256s 00:03:33.904 01:08:59 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:33.904 01:08:59 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:33.904 ************************************ 00:03:33.904 END TEST hugepages 00:03:33.904 ************************************ 00:03:33.904 01:08:59 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:33.904 01:08:59 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:33.905 01:08:59 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:33.905 01:08:59 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:33.905 01:08:59 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:34.163 ************************************ 00:03:34.163 START TEST driver 00:03:34.163 ************************************ 00:03:34.163 01:08:59 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:34.163 * Looking for test storage... 
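The clear_hp teardown traced above walks every NUMA node's hugepage-size directories and writes a zero back before exporting CLEAR_HUGE=yes. A sketch of that loop; the xtrace only records the bare "echo 0", so the nr_hugepages target file is an assumption based on the standard sysfs layout, and the node directories are globbed directly rather than taken from the script's nodes_sys indices:

    #!/usr/bin/env bash
    # Teardown sketch: release all reserved hugepages on every node.
    clear_hp() {
        local node hp
        for node in /sys/devices/system/node/node*; do
            for hp in "$node"/hugepages/hugepages-*; do
                echo 0 > "$hp/nr_hugepages"   # assumed target; the trace shows only "echo 0"
            done
        done
        export CLEAR_HUGE=yes
    }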
00:03:34.163 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:34.163 01:08:59 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:34.163 01:08:59 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:34.163 01:08:59 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:38.350 01:09:03 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:38.350 01:09:03 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:38.350 01:09:03 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:38.350 01:09:03 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:38.350 ************************************ 00:03:38.350 START TEST guess_driver 00:03:38.350 ************************************ 00:03:38.350 01:09:03 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:38.350 01:09:03 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:38.350 01:09:03 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:38.350 01:09:03 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:38.350 01:09:03 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:38.350 01:09:03 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:38.350 01:09:03 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:38.350 01:09:03 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:38.350 01:09:03 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:38.350 01:09:03 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:38.350 01:09:03 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 174 > 0 )) 00:03:38.350 01:09:03 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:38.350 01:09:03 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:38.350 01:09:03 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:38.350 01:09:03 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:38.350 01:09:03 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:38.350 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:38.350 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:38.350 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:38.350 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:38.350 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:38.350 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:38.350 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:38.350 01:09:03 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:38.350 01:09:03 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:38.350 01:09:03 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:38.350 01:09:03 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:38.350 01:09:03 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:38.350 Looking for driver=vfio-pci 00:03:38.350 01:09:03 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.350 01:09:03 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:38.350 01:09:03 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:38.350 01:09:03 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:40.890 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.890 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.890 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.890 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.890 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.890 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.890 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.890 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.890 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.890 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.890 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.890 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.890 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.890 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.890 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.890 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.890 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.890 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.890 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.890 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.890 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.890 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.890 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.890 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.890 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.890 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.890 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.890 01:09:06 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.890 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.890 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.890 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.890 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.890 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.890 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.890 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.890 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.890 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.890 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.890 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.890 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.890 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.890 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.891 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.891 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.891 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.891 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.891 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.891 01:09:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:42.296 01:09:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:42.296 01:09:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:42.296 01:09:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:42.296 01:09:08 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:42.296 01:09:08 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:42.296 01:09:08 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:42.296 01:09:08 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:46.535 00:03:46.535 real 0m8.105s 00:03:46.535 user 0m2.209s 00:03:46.535 sys 0m3.901s 00:03:46.535 01:09:12 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:46.535 01:09:12 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:46.535 ************************************ 00:03:46.535 END TEST guess_driver 00:03:46.535 ************************************ 00:03:46.535 01:09:12 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:03:46.535 00:03:46.535 real 0m12.204s 00:03:46.535 user 0m3.399s 00:03:46.535 sys 0m6.071s 00:03:46.535 01:09:12 
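guess_driver above resolves to vfio-pci as follows: pick_driver calls vfio, which notes whether unsafe no-IOMMU mode is available, counts /sys/kernel/iommu_groups/* (174 here), and accepts vfio_pci once modprobe --show-depends resolves it to real .ko files. A condensed sketch; the unsafe no-IOMMU fallback branch is an assumption, since this run took the IOMMU-group path:

    #!/usr/bin/env bash
    # Driver pick sketch: vfio-pci wins when IOMMU groups exist (or unsafe
    # no-IOMMU mode is on) and the module resolves to loadable objects.
    pick_vfio() {
        local unsafe_vfio=N
        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi
        shopt -s nullglob                    # keep the count honest when no groups exist
        local iommu_groups=(/sys/kernel/iommu_groups/*)
        if ((${#iommu_groups[@]} > 0)) || [[ $unsafe_vfio == Y ]]; then
            # is_driver: usable if modprobe resolves the name to .ko files
            if [[ $(modprobe --show-depends vfio_pci) == *.ko* ]]; then
                echo vfio-pci
                return 0
            fi
        fi
        echo 'No valid driver found'         # the sentinel checked at driver.sh@51
        return 1
    }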
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:46.535 01:09:12 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:46.535 ************************************ 00:03:46.535 END TEST driver 00:03:46.535 ************************************ 00:03:46.535 01:09:12 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:46.535 01:09:12 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:46.535 01:09:12 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:46.535 01:09:12 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:46.535 01:09:12 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:46.535 ************************************ 00:03:46.535 START TEST devices 00:03:46.535 ************************************ 00:03:46.535 01:09:12 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:46.535 * Looking for test storage... 00:03:46.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:46.535 01:09:12 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:46.535 01:09:12 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:46.535 01:09:12 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:46.535 01:09:12 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:49.819 01:09:15 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:49.819 01:09:15 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:49.819 01:09:15 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:49.819 01:09:15 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:49.819 01:09:15 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:49.819 01:09:15 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:49.819 01:09:15 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:49.819 01:09:15 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:49.819 01:09:15 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:49.819 01:09:15 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:49.819 01:09:15 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:49.819 01:09:15 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:49.819 01:09:15 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:49.819 01:09:15 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:49.819 01:09:15 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:49.819 01:09:15 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:49.819 01:09:15 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:49.819 01:09:15 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:03:49.819 01:09:15 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:49.819 01:09:15 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:49.819 01:09:15 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:49.819 
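get_zoned_devs above excludes zoned namespaces from the test pool: every /sys/block/nvme* entry whose queue/zoned attribute reads anything other than "none" is recorded (here nvme0n1 reads "none", so nothing is excluded). A sketch; the real map stores each device's PCI address as the value, simplified to 1 here:

    #!/usr/bin/env bash
    # Zoned-namespace scan sketch: collect block devices the tests must skip.
    get_zoned_devs() {
        declare -gA zoned_devs=()
        local nvme
        shopt -s nullglob
        for nvme in /sys/block/nvme*; do
            [[ -e $nvme/queue/zoned ]] || continue
            if [[ $(<"$nvme/queue/zoned") != none ]]; then
                zoned_devs[${nvme##*/}]=1    # the script itself records the PCI bdf
            fi
        done
        return 0
    }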
01:09:15 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:49.819 No valid GPT data, bailing 00:03:49.819 01:09:15 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:49.819 01:09:15 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:49.819 01:09:15 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:49.819 01:09:15 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:49.819 01:09:15 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:49.819 01:09:15 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:49.819 01:09:15 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816 00:03:49.819 01:09:15 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:03:49.819 01:09:15 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:49.819 01:09:15 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:03:49.819 01:09:15 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:49.819 01:09:15 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:49.819 01:09:15 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:49.819 01:09:15 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:49.819 01:09:15 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:49.819 01:09:15 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:49.819 ************************************ 00:03:49.819 START TEST nvme_mount 00:03:49.819 ************************************ 00:03:49.819 01:09:15 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:49.819 01:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:49.819 01:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:49.819 01:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:49.819 01:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:49.819 01:09:15 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:49.819 01:09:15 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:49.819 01:09:15 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:49.819 01:09:15 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:49.819 01:09:15 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:49.819 01:09:15 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:49.819 01:09:15 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:49.819 01:09:15 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:49.819 01:09:15 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:49.819 01:09:15 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:49.819 01:09:15 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:49.819 01:09:15 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:03:49.819 01:09:15 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:49.819 01:09:15 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:49.819 01:09:15 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:50.755 Creating new GPT entries in memory. 00:03:50.755 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:50.755 other utilities. 00:03:50.755 01:09:16 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:50.755 01:09:16 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:50.755 01:09:16 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:50.755 01:09:16 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:50.755 01:09:16 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:51.691 Creating new GPT entries in memory. 00:03:51.691 The operation has completed successfully. 00:03:51.691 01:09:17 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:51.691 01:09:17 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:51.691 01:09:17 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3187541 00:03:51.691 01:09:17 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:51.691 01:09:17 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:51.691 01:09:17 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:51.691 01:09:17 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:51.691 01:09:17 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:51.691 01:09:17 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:51.691 01:09:17 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:51.691 01:09:17 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:51.691 01:09:17 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:51.691 01:09:17 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:51.691 01:09:17 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:51.691 01:09:17 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:51.691 01:09:17 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:51.691 01:09:17 
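partition_drive and mkfs above reduce to a short sgdisk/mkfs.ext4 sequence: 1073741824 bytes over 512-byte sectors is 2097152 sectors, so a first partition starting at LBA 2048 ends at 2099199, exactly the --new=1:2048:2099199 in the trace. A sketch of that sequence; the sync_dev_uevents wrapper is omitted and $mount_point abbreviates the workspace nvme_mount path:

    #!/usr/bin/env bash
    # Partition-and-mount sketch mirroring the traced commands.
    disk=/dev/nvme0n1
    mount_point=./nvme_mount                 # stand-in for .../spdk/test/setup/nvme_mount

    sgdisk "$disk" --zap-all                 # destroy existing GPT/MBR structures
    flock "$disk" sgdisk "$disk" --new=1:2048:2099199   # 1 GiB: 2048 + 2097152 - 1
    mkdir -p "$mount_point"
    mkfs.ext4 -qF "${disk}p1"                # quiet, force
    mount "${disk}p1" "$mount_point"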
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:51.691 01:09:17 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:51.691 01:09:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.691 01:09:17 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:51.691 01:09:17 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:51.691 01:09:17 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:51.691 01:09:17 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:54.221 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.221 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:54.221 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:54.221 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.221 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.221 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.221 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.221 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.221 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.221 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.221 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.221 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.221 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.221 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.221 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.221 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.221 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.221 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.221 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.221 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.221 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.221 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.221 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.221 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.221 01:09:19 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.221 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.221 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.221 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.221 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.221 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.221 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.221 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.221 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.221 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.221 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:54.221 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.221 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:54.221 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:54.221 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:54.222 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:54.222 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:54.222 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:54.222 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:54.222 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:54.222 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:54.222 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:54.222 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:54.222 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:54.222 01:09:19 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:54.222 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:54.222 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:03:54.222 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:54.222 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:54.222 01:09:20 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:54.222 01:09:20 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:54.222 01:09:20 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:54.222 01:09:20 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:54.222 01:09:20 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:54.222 01:09:20 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:54.222 01:09:20 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:54.222 01:09:20 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:54.222 01:09:20 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:54.222 01:09:20 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:54.222 01:09:20 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:54.222 01:09:20 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:54.222 01:09:20 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:54.222 01:09:20 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:54.222 01:09:20 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:54.222 01:09:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.222 01:09:20 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:54.222 01:09:20 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:54.222 01:09:20 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.222 01:09:20 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:56.744 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.744 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:56.744 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:56.744 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.744 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.744 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.744 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.744 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.744 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.744 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.744 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.744 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.744 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.744 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.744 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.744 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.744 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.744 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.744 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.744 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.744 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.744 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.744 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.744 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.744 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.744 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.744 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.744 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.744 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.744 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.744 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.744 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.744 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.744 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.744 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:56.744 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.744 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:56.744 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:56.744 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.744 01:09:22 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:56.745 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:56.745 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:56.745 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:03:56.745 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:56.745 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:56.745 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:56.745 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:56.745 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:56.745 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:56.745 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:56.745 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:56.745 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:56.745 01:09:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:56.745 01:09:22 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.745 01:09:22 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:59.272 01:09:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.272 01:09:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:59.272 01:09:24 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:59.272 01:09:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.272 01:09:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.272 01:09:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.272 01:09:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.272 01:09:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.272 01:09:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.272 01:09:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.272 01:09:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.272 01:09:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.272 01:09:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.272 01:09:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.272 01:09:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 
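The verify pass traced here re-runs setup.sh config with PCI_ALLOWED pinned to the test controller and scans each output line as "pci _ _ status": found flips to 1 when 0000:5e:00.0 reports the expected "Active devices: ..." mounts, and every other function (the 00:04.x and 80:04.x entries) falls through. A sketch of that loop; the relative setup.sh path abbreviates the full workspace path:

    #!/usr/bin/env bash
    # Verify sketch: confirm the allowed controller reports the expected mounts.
    verify() {
        local dev=$1 mounts=$2
        local pci _ status found=0
        while read -r pci _ _ status; do
            [[ $pci == "$dev" && $status == *"Active devices: "*"$mounts"* ]] && found=1
        done < <(PCI_ALLOWED="$dev" ./scripts/setup.sh config)
        ((found == 1))
    }

    verify 0000:5e:00.0 nvme0n1:nvme0n1p1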
00:03:59.272 01:09:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.272 01:09:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.272 01:09:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.272 01:09:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.272 01:09:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.272 01:09:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.272 01:09:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.272 01:09:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.272 01:09:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.272 01:09:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.272 01:09:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.272 01:09:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.272 01:09:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.272 01:09:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.272 01:09:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.272 01:09:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.272 01:09:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.272 01:09:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.272 01:09:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.272 01:09:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:59.272 01:09:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.272 01:09:25 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:59.272 01:09:25 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:59.272 01:09:25 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:59.272 01:09:25 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:59.272 01:09:25 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:59.272 01:09:25 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:59.272 01:09:25 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:59.272 01:09:25 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:59.272 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:59.272 00:03:59.272 real 0m9.754s 00:03:59.272 user 0m2.553s 00:03:59.272 sys 0m4.835s 00:03:59.272 01:09:25 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.272 01:09:25 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:03:59.272 ************************************ 00:03:59.272 END TEST nvme_mount 00:03:59.272 ************************************ 00:03:59.272 01:09:25 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:59.272 01:09:25 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:59.272 01:09:25 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.272 01:09:25 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.272 01:09:25 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:59.272 ************************************ 00:03:59.272 START TEST dm_mount 00:03:59.272 ************************************ 00:03:59.272 01:09:25 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:03:59.272 01:09:25 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:59.272 01:09:25 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:59.273 01:09:25 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:59.273 01:09:25 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:59.273 01:09:25 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:59.273 01:09:25 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:59.273 01:09:25 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:59.273 01:09:25 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:59.273 01:09:25 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:59.273 01:09:25 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:59.273 01:09:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:59.273 01:09:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:59.273 01:09:25 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:59.273 01:09:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:59.273 01:09:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:59.273 01:09:25 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:59.273 01:09:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:59.273 01:09:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:59.273 01:09:25 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:59.273 01:09:25 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:59.273 01:09:25 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:00.243 Creating new GPT entries in memory. 00:04:00.243 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:00.243 other utilities. 00:04:00.243 01:09:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:00.243 01:09:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:00.243 01:09:26 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:00.243 01:09:26 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:00.243 01:09:26 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:01.620 Creating new GPT entries in memory. 00:04:01.620 The operation has completed successfully. 00:04:01.620 01:09:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:01.620 01:09:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:01.620 01:09:27 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:01.620 01:09:27 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:01.620 01:09:27 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:02.557 The operation has completed successfully. 00:04:02.557 01:09:28 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:02.557 01:09:28 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:02.557 01:09:28 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3191503 00:04:02.557 01:09:28 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:02.557 01:09:28 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:02.557 01:09:28 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:02.557 01:09:28 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:02.557 01:09:28 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:02.557 01:09:28 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:02.557 01:09:28 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:02.557 01:09:28 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:02.557 01:09:28 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:02.557 01:09:28 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:04:02.557 01:09:28 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:04:02.557 01:09:28 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:04:02.557 01:09:28 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:04:02.557 01:09:28 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:02.557 01:09:28 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:02.557 01:09:28 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:02.557 01:09:28 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:02.557 01:09:28 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:02.557 01:09:28 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:02.557 01:09:28 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:02.557 01:09:28 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:02.557 01:09:28 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:02.557 01:09:28 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:02.557 01:09:28 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:02.557 01:09:28 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:02.557 01:09:28 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:02.557 01:09:28 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:02.557 01:09:28 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:02.557 01:09:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.557 01:09:28 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:02.557 01:09:28 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:02.557 01:09:28 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.557 01:09:28 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:05.088 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.088 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:05.088 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:05.088 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.088 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.088 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.088 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.088 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.088 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.088 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.089 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.089 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.089 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.089 01:09:30 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.089 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.089 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.089 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.089 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.089 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.089 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.089 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.089 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.089 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.089 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.089 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.089 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.089 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.089 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.089 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.089 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.089 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.089 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.089 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.089 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.089 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.089 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.089 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:05.089 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:05.089 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:05.089 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:05.089 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:05.089 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:05.089 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:04:05.089 01:09:30 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:05.089 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:04:05.089 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:05.089 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:05.089 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:05.089 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:05.089 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:05.089 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.089 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:05.089 01:09:30 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:05.089 01:09:30 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.089 01:09:30 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:07.618 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:07.618 00:04:07.618 real 0m8.190s 00:04:07.618 user 0m1.739s 00:04:07.618 sys 0m3.311s 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.618 01:09:33 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:07.618 ************************************ 00:04:07.618 END TEST dm_mount 00:04:07.618 ************************************ 00:04:07.618 01:09:33 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:04:07.618 01:09:33 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:07.618 01:09:33 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:07.618 01:09:33 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:07.618 01:09:33 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:07.618 01:09:33 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:07.618 01:09:33 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:07.618 01:09:33 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:07.874 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:07.874 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:07.874 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:07.875 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:07.875 01:09:33 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:07.875 01:09:33 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:07.875 01:09:33 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:07.875 01:09:33 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:07.875 01:09:33 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:07.875 01:09:33 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:07.875 01:09:33 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:07.875 00:04:07.875 real 0m21.568s 00:04:07.875 user 0m5.508s 00:04:07.875 sys 0m10.430s 00:04:07.875 01:09:33 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.875 01:09:33 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:07.875 ************************************ 00:04:07.875 END TEST devices 00:04:07.875 ************************************ 00:04:07.875 01:09:33 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:07.875 00:04:07.875 real 1m14.327s 00:04:07.875 user 0m22.835s 00:04:07.875 sys 0m39.569s 00:04:07.875 01:09:33 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.875 01:09:33 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:07.875 ************************************ 00:04:07.875 END TEST setup.sh 00:04:07.875 ************************************ 00:04:07.875 01:09:33 -- common/autotest_common.sh@1142 -- # return 0 00:04:07.875 01:09:33 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:10.398 Hugepages 00:04:10.398 node hugesize free / total 00:04:10.398 node0 1048576kB 0 / 0 00:04:10.398 node0 2048kB 2048 / 2048 00:04:10.398 node1 1048576kB 0 / 0 00:04:10.398 node1 2048kB 0 / 0 00:04:10.398 00:04:10.398 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:10.398 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:10.398 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:10.398 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:10.398 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:10.398 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:10.398 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:10.398 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:10.398 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:10.398 NVMe 
0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:10.398 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:10.398 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:10.398 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:10.398 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:10.398 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:10.657 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:10.657 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:10.657 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:10.657 01:09:36 -- spdk/autotest.sh@130 -- # uname -s 00:04:10.657 01:09:36 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:10.657 01:09:36 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:10.657 01:09:36 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:13.182 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:13.182 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:13.182 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:13.182 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:13.182 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:13.182 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:13.182 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:13.182 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:13.182 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:13.182 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:13.182 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:13.182 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:13.182 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:13.182 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:13.182 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:13.182 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:14.553 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:14.810 01:09:40 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:15.743 01:09:41 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:15.743 01:09:41 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:15.743 01:09:41 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:15.743 01:09:41 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:15.743 01:09:41 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:15.743 01:09:41 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:15.743 01:09:41 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:15.743 01:09:41 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:15.743 01:09:41 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:15.743 01:09:41 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:15.743 01:09:41 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:04:15.743 01:09:41 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:18.271 Waiting for block devices as requested 00:04:18.271 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:18.530 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:18.530 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:18.530 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:18.789 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:18.789 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:18.789 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:18.789 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:19.047 0000:00:04.0 (8086 2021): 
vfio-pci -> ioatdma 00:04:19.047 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:19.047 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:19.306 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:19.306 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:19.306 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:19.306 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:19.563 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:19.563 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:19.563 01:09:45 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:19.564 01:09:45 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:19.564 01:09:45 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:19.564 01:09:45 -- common/autotest_common.sh@1502 -- # grep 0000:5e:00.0/nvme/nvme 00:04:19.564 01:09:45 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:19.564 01:09:45 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:19.564 01:09:45 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:19.564 01:09:45 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:19.564 01:09:45 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:19.564 01:09:45 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:19.564 01:09:45 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:19.564 01:09:45 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:19.564 01:09:45 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:19.564 01:09:45 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:04:19.564 01:09:45 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:19.564 01:09:45 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:19.564 01:09:45 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:19.564 01:09:45 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:19.564 01:09:45 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:19.564 01:09:45 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:19.564 01:09:45 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:19.564 01:09:45 -- common/autotest_common.sh@1557 -- # continue 00:04:19.823 01:09:45 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:19.823 01:09:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:19.823 01:09:45 -- common/autotest_common.sh@10 -- # set +x 00:04:19.823 01:09:45 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:19.823 01:09:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:19.823 01:09:45 -- common/autotest_common.sh@10 -- # set +x 00:04:19.823 01:09:45 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:22.483 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:22.483 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:22.483 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:22.483 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:22.483 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:22.483 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:22.483 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:22.483 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:22.483 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:22.483 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 
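The oacs probe traced above boils down to three shell steps. A minimal sketch, assuming nvme-cli is installed and the controller sits behind the BDF used throughout this run:

    # find the controller node behind the BDF via sysfs, then test OACS
    # bit 3 (0x8), which advertises Namespace Management support
    bdf=0000:5e:00.0
    for link in /sys/class/nvme/nvme*; do
        path=$(readlink -f "$link")
        [[ $path == *"$bdf/nvme/nvme"* ]] && ctrlr=/dev/$(basename "$path")
    done
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)   # ' 0xe' in this run
    (( oacs & 0x8 )) && echo "$ctrlr supports namespace management"

With oacs reported as 0xe here, 0xe & 0x8 = 8, so the check passes and the revert path continues.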
00:04:22.483 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:22.483 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:22.483 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:22.483 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:22.483 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:22.483 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:23.861 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:23.861 01:09:49 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:23.861 01:09:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:23.861 01:09:49 -- common/autotest_common.sh@10 -- # set +x 00:04:23.861 01:09:49 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:23.861 01:09:49 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:23.861 01:09:49 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:23.861 01:09:49 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:23.861 01:09:49 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:23.861 01:09:49 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:23.861 01:09:49 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:23.861 01:09:49 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:23.861 01:09:49 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:23.861 01:09:49 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:23.861 01:09:49 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:24.120 01:09:49 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:24.120 01:09:49 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:04:24.120 01:09:49 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:24.120 01:09:49 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:24.120 01:09:49 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:24.120 01:09:49 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:24.120 01:09:49 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:24.120 01:09:49 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:5e:00.0 00:04:24.120 01:09:49 -- common/autotest_common.sh@1592 -- # [[ -z 0000:5e:00.0 ]] 00:04:24.120 01:09:49 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=3200280 00:04:24.120 01:09:49 -- common/autotest_common.sh@1598 -- # waitforlisten 3200280 00:04:24.120 01:09:49 -- common/autotest_common.sh@829 -- # '[' -z 3200280 ']' 00:04:24.120 01:09:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.120 01:09:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:24.120 01:09:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:24.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:24.120 01:09:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:24.120 01:09:49 -- common/autotest_common.sh@10 -- # set +x 00:04:24.120 01:09:49 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:24.120 [2024-07-16 01:09:49.913318] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
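The 0x0a54 filter in opal_revert_cleanup is a plain sysfs read. A sketch, assuming the BDF list comes from gen_nvme.sh exactly as in the trace above:

    # keep only BDFs whose PCI device ID matches the 0x0a54 NVMe part
    want=0x0a54
    gen=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
    for bdf in $("$gen" | jq -r '.config[].params.traddr'); do
        [[ $(cat "/sys/bus/pci/devices/$bdf/device") == "$want" ]] && echo "$bdf"
    done

On this node it prints the single BDF 0000:5e:00.0, which is why the bdfs array ends up with one entry.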
00:04:24.120 [2024-07-16 01:09:49.913376] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3200280 ] 00:04:24.120 EAL: No free 2048 kB hugepages reported on node 1 00:04:24.120 [2024-07-16 01:09:49.966882] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.120 [2024-07-16 01:09:50.051549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.057 01:09:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:25.057 01:09:50 -- common/autotest_common.sh@862 -- # return 0 00:04:25.057 01:09:50 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:04:25.057 01:09:50 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:04:25.057 01:09:50 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:28.342 nvme0n1 00:04:28.342 01:09:53 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:28.342 [2024-07-16 01:09:53.826485] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:28.342 request: 00:04:28.342 { 00:04:28.342 "nvme_ctrlr_name": "nvme0", 00:04:28.342 "password": "test", 00:04:28.342 "method": "bdev_nvme_opal_revert", 00:04:28.342 "req_id": 1 00:04:28.342 } 00:04:28.342 Got JSON-RPC error response 00:04:28.342 response: 00:04:28.342 { 00:04:28.342 "code": -32602, 00:04:28.342 "message": "Invalid parameters" 00:04:28.342 } 00:04:28.342 01:09:53 -- common/autotest_common.sh@1604 -- # true 00:04:28.342 01:09:53 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:04:28.342 01:09:53 -- common/autotest_common.sh@1608 -- # killprocess 3200280 00:04:28.342 01:09:53 -- common/autotest_common.sh@948 -- # '[' -z 3200280 ']' 00:04:28.342 01:09:53 -- common/autotest_common.sh@952 -- # kill -0 3200280 00:04:28.342 01:09:53 -- common/autotest_common.sh@953 -- # uname 00:04:28.342 01:09:53 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:28.342 01:09:53 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3200280 00:04:28.342 01:09:53 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:28.342 01:09:53 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:28.342 01:09:53 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3200280' 00:04:28.342 killing process with pid 3200280 00:04:28.342 01:09:53 -- common/autotest_common.sh@967 -- # kill 3200280 00:04:28.342 01:09:53 -- common/autotest_common.sh@972 -- # wait 3200280 00:04:30.242 01:09:56 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:30.242 01:09:56 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:30.242 01:09:56 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:30.242 01:09:56 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:30.242 01:09:56 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:30.242 01:09:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:30.242 01:09:56 -- common/autotest_common.sh@10 -- # set +x 00:04:30.242 01:09:56 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:30.242 01:09:56 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:30.242 01:09:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 
']' 00:04:30.242 01:09:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.242 01:09:56 -- common/autotest_common.sh@10 -- # set +x 00:04:30.242 ************************************ 00:04:30.242 START TEST env 00:04:30.242 ************************************ 00:04:30.242 01:09:56 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:30.242 * Looking for test storage... 00:04:30.242 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:30.242 01:09:56 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:30.242 01:09:56 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:30.242 01:09:56 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.242 01:09:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:30.242 ************************************ 00:04:30.242 START TEST env_memory 00:04:30.242 ************************************ 00:04:30.242 01:09:56 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:30.242 00:04:30.242 00:04:30.242 CUnit - A unit testing framework for C - Version 2.1-3 00:04:30.242 http://cunit.sourceforge.net/ 00:04:30.242 00:04:30.242 00:04:30.242 Suite: memory 00:04:30.501 Test: alloc and free memory map ...[2024-07-16 01:09:56.249448] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:30.501 passed 00:04:30.501 Test: mem map translation ...[2024-07-16 01:09:56.267148] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:30.501 [2024-07-16 01:09:56.267162] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:30.501 [2024-07-16 01:09:56.267196] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:30.501 [2024-07-16 01:09:56.267205] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:30.501 passed 00:04:30.501 Test: mem map registration ...[2024-07-16 01:09:56.303149] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:30.501 [2024-07-16 01:09:56.303163] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:30.501 passed 00:04:30.501 Test: mem map adjacent registrations ...passed 00:04:30.501 00:04:30.501 Run Summary: Type Total Ran Passed Failed Inactive 00:04:30.501 suites 1 1 n/a 0 0 00:04:30.501 tests 4 4 4 0 0 00:04:30.501 asserts 152 152 152 0 n/a 00:04:30.501 00:04:30.501 Elapsed time = 0.133 seconds 00:04:30.501 00:04:30.501 real 0m0.144s 00:04:30.501 user 0m0.132s 00:04:30.501 sys 0m0.011s 00:04:30.501 01:09:56 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:30.501 01:09:56 env.env_memory -- common/autotest_common.sh@10 -- # 
set +x 00:04:30.501 ************************************ 00:04:30.501 END TEST env_memory 00:04:30.501 ************************************ 00:04:30.501 01:09:56 env -- common/autotest_common.sh@1142 -- # return 0 00:04:30.501 01:09:56 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:30.501 01:09:56 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:30.501 01:09:56 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.501 01:09:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:30.501 ************************************ 00:04:30.501 START TEST env_vtophys 00:04:30.501 ************************************ 00:04:30.501 01:09:56 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:30.501 EAL: lib.eal log level changed from notice to debug 00:04:30.501 EAL: Detected lcore 0 as core 0 on socket 0 00:04:30.501 EAL: Detected lcore 1 as core 1 on socket 0 00:04:30.501 EAL: Detected lcore 2 as core 2 on socket 0 00:04:30.501 EAL: Detected lcore 3 as core 3 on socket 0 00:04:30.501 EAL: Detected lcore 4 as core 4 on socket 0 00:04:30.501 EAL: Detected lcore 5 as core 5 on socket 0 00:04:30.501 EAL: Detected lcore 6 as core 6 on socket 0 00:04:30.501 EAL: Detected lcore 7 as core 8 on socket 0 00:04:30.501 EAL: Detected lcore 8 as core 9 on socket 0 00:04:30.501 EAL: Detected lcore 9 as core 10 on socket 0 00:04:30.501 EAL: Detected lcore 10 as core 11 on socket 0 00:04:30.501 EAL: Detected lcore 11 as core 12 on socket 0 00:04:30.501 EAL: Detected lcore 12 as core 13 on socket 0 00:04:30.501 EAL: Detected lcore 13 as core 16 on socket 0 00:04:30.501 EAL: Detected lcore 14 as core 17 on socket 0 00:04:30.501 EAL: Detected lcore 15 as core 18 on socket 0 00:04:30.502 EAL: Detected lcore 16 as core 19 on socket 0 00:04:30.502 EAL: Detected lcore 17 as core 20 on socket 0 00:04:30.502 EAL: Detected lcore 18 as core 21 on socket 0 00:04:30.502 EAL: Detected lcore 19 as core 25 on socket 0 00:04:30.502 EAL: Detected lcore 20 as core 26 on socket 0 00:04:30.502 EAL: Detected lcore 21 as core 27 on socket 0 00:04:30.502 EAL: Detected lcore 22 as core 28 on socket 0 00:04:30.502 EAL: Detected lcore 23 as core 29 on socket 0 00:04:30.502 EAL: Detected lcore 24 as core 0 on socket 1 00:04:30.502 EAL: Detected lcore 25 as core 1 on socket 1 00:04:30.502 EAL: Detected lcore 26 as core 2 on socket 1 00:04:30.502 EAL: Detected lcore 27 as core 3 on socket 1 00:04:30.502 EAL: Detected lcore 28 as core 4 on socket 1 00:04:30.502 EAL: Detected lcore 29 as core 5 on socket 1 00:04:30.502 EAL: Detected lcore 30 as core 6 on socket 1 00:04:30.502 EAL: Detected lcore 31 as core 8 on socket 1 00:04:30.502 EAL: Detected lcore 32 as core 10 on socket 1 00:04:30.502 EAL: Detected lcore 33 as core 11 on socket 1 00:04:30.502 EAL: Detected lcore 34 as core 12 on socket 1 00:04:30.502 EAL: Detected lcore 35 as core 13 on socket 1 00:04:30.502 EAL: Detected lcore 36 as core 16 on socket 1 00:04:30.502 EAL: Detected lcore 37 as core 17 on socket 1 00:04:30.502 EAL: Detected lcore 38 as core 18 on socket 1 00:04:30.502 EAL: Detected lcore 39 as core 19 on socket 1 00:04:30.502 EAL: Detected lcore 40 as core 20 on socket 1 00:04:30.502 EAL: Detected lcore 41 as core 21 on socket 1 00:04:30.502 EAL: Detected lcore 42 as core 24 on socket 1 00:04:30.502 EAL: Detected lcore 43 as core 25 on socket 1 00:04:30.502 EAL: Detected lcore 44 as core 
26 on socket 1 00:04:30.502 EAL: Detected lcore 45 as core 27 on socket 1 00:04:30.502 EAL: Detected lcore 46 as core 28 on socket 1 00:04:30.502 EAL: Detected lcore 47 as core 29 on socket 1 00:04:30.502 EAL: Detected lcore 48 as core 0 on socket 0 00:04:30.502 EAL: Detected lcore 49 as core 1 on socket 0 00:04:30.502 EAL: Detected lcore 50 as core 2 on socket 0 00:04:30.502 EAL: Detected lcore 51 as core 3 on socket 0 00:04:30.502 EAL: Detected lcore 52 as core 4 on socket 0 00:04:30.502 EAL: Detected lcore 53 as core 5 on socket 0 00:04:30.502 EAL: Detected lcore 54 as core 6 on socket 0 00:04:30.502 EAL: Detected lcore 55 as core 8 on socket 0 00:04:30.502 EAL: Detected lcore 56 as core 9 on socket 0 00:04:30.502 EAL: Detected lcore 57 as core 10 on socket 0 00:04:30.502 EAL: Detected lcore 58 as core 11 on socket 0 00:04:30.502 EAL: Detected lcore 59 as core 12 on socket 0 00:04:30.502 EAL: Detected lcore 60 as core 13 on socket 0 00:04:30.502 EAL: Detected lcore 61 as core 16 on socket 0 00:04:30.502 EAL: Detected lcore 62 as core 17 on socket 0 00:04:30.502 EAL: Detected lcore 63 as core 18 on socket 0 00:04:30.502 EAL: Detected lcore 64 as core 19 on socket 0 00:04:30.502 EAL: Detected lcore 65 as core 20 on socket 0 00:04:30.502 EAL: Detected lcore 66 as core 21 on socket 0 00:04:30.502 EAL: Detected lcore 67 as core 25 on socket 0 00:04:30.502 EAL: Detected lcore 68 as core 26 on socket 0 00:04:30.502 EAL: Detected lcore 69 as core 27 on socket 0 00:04:30.502 EAL: Detected lcore 70 as core 28 on socket 0 00:04:30.502 EAL: Detected lcore 71 as core 29 on socket 0 00:04:30.502 EAL: Detected lcore 72 as core 0 on socket 1 00:04:30.502 EAL: Detected lcore 73 as core 1 on socket 1 00:04:30.502 EAL: Detected lcore 74 as core 2 on socket 1 00:04:30.502 EAL: Detected lcore 75 as core 3 on socket 1 00:04:30.502 EAL: Detected lcore 76 as core 4 on socket 1 00:04:30.502 EAL: Detected lcore 77 as core 5 on socket 1 00:04:30.502 EAL: Detected lcore 78 as core 6 on socket 1 00:04:30.502 EAL: Detected lcore 79 as core 8 on socket 1 00:04:30.502 EAL: Detected lcore 80 as core 10 on socket 1 00:04:30.502 EAL: Detected lcore 81 as core 11 on socket 1 00:04:30.502 EAL: Detected lcore 82 as core 12 on socket 1 00:04:30.502 EAL: Detected lcore 83 as core 13 on socket 1 00:04:30.502 EAL: Detected lcore 84 as core 16 on socket 1 00:04:30.502 EAL: Detected lcore 85 as core 17 on socket 1 00:04:30.502 EAL: Detected lcore 86 as core 18 on socket 1 00:04:30.502 EAL: Detected lcore 87 as core 19 on socket 1 00:04:30.502 EAL: Detected lcore 88 as core 20 on socket 1 00:04:30.502 EAL: Detected lcore 89 as core 21 on socket 1 00:04:30.502 EAL: Detected lcore 90 as core 24 on socket 1 00:04:30.502 EAL: Detected lcore 91 as core 25 on socket 1 00:04:30.502 EAL: Detected lcore 92 as core 26 on socket 1 00:04:30.502 EAL: Detected lcore 93 as core 27 on socket 1 00:04:30.502 EAL: Detected lcore 94 as core 28 on socket 1 00:04:30.502 EAL: Detected lcore 95 as core 29 on socket 1 00:04:30.502 EAL: Maximum logical cores by configuration: 128 00:04:30.502 EAL: Detected CPU lcores: 96 00:04:30.502 EAL: Detected NUMA nodes: 2 00:04:30.502 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:30.502 EAL: Detected shared linkage of DPDK 00:04:30.502 EAL: No shared files mode enabled, IPC will be disabled 00:04:30.502 EAL: Bus pci wants IOVA as 'DC' 00:04:30.502 EAL: Buses did not request a specific IOVA mode. 00:04:30.502 EAL: IOMMU is available, selecting IOVA as VA mode. 
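EAL's VFIO decision logged just above can be sanity-checked from userspace before launching anything. A rough sketch; the paths are standard kernel sysfs, nothing SPDK-specific:

    # a populated iommu_groups directory is what lets EAL report
    # 'IOMMU is available' and select IOVA-as-VA with VFIO type 1
    if [ -n "$(ls -A /sys/kernel/iommu_groups 2>/dev/null)" ]; then
        echo "IOMMU groups: $(ls /sys/kernel/iommu_groups | wc -l)"
    else
        echo "no IOMMU groups; VFIO would be limited to no-IOMMU mode" >&2
    fi
    modprobe -n -q vfio-pci && echo "vfio-pci module resolvable"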
00:04:30.502 EAL: Selected IOVA mode 'VA' 00:04:30.502 EAL: No free 2048 kB hugepages reported on node 1 00:04:30.502 EAL: Probing VFIO support... 00:04:30.502 EAL: IOMMU type 1 (Type 1) is supported 00:04:30.502 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:30.502 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:30.502 EAL: VFIO support initialized 00:04:30.502 EAL: Ask a virtual area of 0x2e000 bytes 00:04:30.502 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:30.502 EAL: Setting up physically contiguous memory... 00:04:30.502 EAL: Setting maximum number of open files to 524288 00:04:30.502 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:30.502 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:30.502 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:30.502 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.502 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:30.502 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.502 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.502 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:30.502 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:30.502 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.502 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:30.502 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.502 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.502 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:30.502 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:30.502 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.502 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:30.502 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.502 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.502 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:30.502 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:30.502 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.502 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:30.502 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.502 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.502 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:30.502 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:30.502 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:30.502 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.502 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:30.502 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:30.502 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.502 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:30.502 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:30.502 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.502 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:30.502 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:30.502 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.502 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:30.502 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:30.502 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.502 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:30.502 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:04:30.502 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.502 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:30.502 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:30.502 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.502 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:30.502 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:30.502 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.502 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:30.502 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:30.502 EAL: Hugepages will be freed exactly as allocated. 00:04:30.502 EAL: No shared files mode enabled, IPC is disabled 00:04:30.502 EAL: No shared files mode enabled, IPC is disabled 00:04:30.502 EAL: TSC frequency is ~2100000 KHz 00:04:30.502 EAL: Main lcore 0 is ready (tid=7f3de5e88a00;cpuset=[0]) 00:04:30.502 EAL: Trying to obtain current memory policy. 00:04:30.502 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.502 EAL: Restoring previous memory policy: 0 00:04:30.502 EAL: request: mp_malloc_sync 00:04:30.502 EAL: No shared files mode enabled, IPC is disabled 00:04:30.502 EAL: Heap on socket 0 was expanded by 2MB 00:04:30.502 EAL: No shared files mode enabled, IPC is disabled 00:04:30.502 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:30.502 EAL: Mem event callback 'spdk:(nil)' registered 00:04:30.761 00:04:30.761 00:04:30.761 CUnit - A unit testing framework for C - Version 2.1-3 00:04:30.761 http://cunit.sourceforge.net/ 00:04:30.761 00:04:30.761 00:04:30.761 Suite: components_suite 00:04:30.761 Test: vtophys_malloc_test ...passed 00:04:30.761 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:30.761 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.761 EAL: Restoring previous memory policy: 4 00:04:30.761 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.761 EAL: request: mp_malloc_sync 00:04:30.761 EAL: No shared files mode enabled, IPC is disabled 00:04:30.761 EAL: Heap on socket 0 was expanded by 4MB 00:04:30.761 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.761 EAL: request: mp_malloc_sync 00:04:30.761 EAL: No shared files mode enabled, IPC is disabled 00:04:30.761 EAL: Heap on socket 0 was shrunk by 4MB 00:04:30.761 EAL: Trying to obtain current memory policy. 00:04:30.761 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.761 EAL: Restoring previous memory policy: 4 00:04:30.761 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.761 EAL: request: mp_malloc_sync 00:04:30.761 EAL: No shared files mode enabled, IPC is disabled 00:04:30.761 EAL: Heap on socket 0 was expanded by 6MB 00:04:30.761 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.761 EAL: request: mp_malloc_sync 00:04:30.761 EAL: No shared files mode enabled, IPC is disabled 00:04:30.761 EAL: Heap on socket 0 was shrunk by 6MB 00:04:30.761 EAL: Trying to obtain current memory policy. 
00:04:30.761 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.761 EAL: Restoring previous memory policy: 4 00:04:30.761 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.761 EAL: request: mp_malloc_sync 00:04:30.761 EAL: No shared files mode enabled, IPC is disabled 00:04:30.761 EAL: Heap on socket 0 was expanded by 10MB 00:04:30.761 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.761 EAL: request: mp_malloc_sync 00:04:30.761 EAL: No shared files mode enabled, IPC is disabled 00:04:30.761 EAL: Heap on socket 0 was shrunk by 10MB 00:04:30.761 EAL: Trying to obtain current memory policy. 00:04:30.761 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.761 EAL: Restoring previous memory policy: 4 00:04:30.761 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.761 EAL: request: mp_malloc_sync 00:04:30.761 EAL: No shared files mode enabled, IPC is disabled 00:04:30.761 EAL: Heap on socket 0 was expanded by 18MB 00:04:30.761 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.761 EAL: request: mp_malloc_sync 00:04:30.761 EAL: No shared files mode enabled, IPC is disabled 00:04:30.761 EAL: Heap on socket 0 was shrunk by 18MB 00:04:30.761 EAL: Trying to obtain current memory policy. 00:04:30.761 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.761 EAL: Restoring previous memory policy: 4 00:04:30.761 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.761 EAL: request: mp_malloc_sync 00:04:30.761 EAL: No shared files mode enabled, IPC is disabled 00:04:30.761 EAL: Heap on socket 0 was expanded by 34MB 00:04:30.761 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.761 EAL: request: mp_malloc_sync 00:04:30.761 EAL: No shared files mode enabled, IPC is disabled 00:04:30.761 EAL: Heap on socket 0 was shrunk by 34MB 00:04:30.761 EAL: Trying to obtain current memory policy. 00:04:30.761 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.761 EAL: Restoring previous memory policy: 4 00:04:30.761 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.761 EAL: request: mp_malloc_sync 00:04:30.761 EAL: No shared files mode enabled, IPC is disabled 00:04:30.761 EAL: Heap on socket 0 was expanded by 66MB 00:04:30.761 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.761 EAL: request: mp_malloc_sync 00:04:30.761 EAL: No shared files mode enabled, IPC is disabled 00:04:30.761 EAL: Heap on socket 0 was shrunk by 66MB 00:04:30.761 EAL: Trying to obtain current memory policy. 00:04:30.761 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.761 EAL: Restoring previous memory policy: 4 00:04:30.761 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.761 EAL: request: mp_malloc_sync 00:04:30.761 EAL: No shared files mode enabled, IPC is disabled 00:04:30.761 EAL: Heap on socket 0 was expanded by 130MB 00:04:30.761 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.761 EAL: request: mp_malloc_sync 00:04:30.761 EAL: No shared files mode enabled, IPC is disabled 00:04:30.761 EAL: Heap on socket 0 was shrunk by 130MB 00:04:30.761 EAL: Trying to obtain current memory policy. 
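Worth noting: the expansion sizes this suite walks through are not arbitrary. Each round's reported growth is 2^k + 2 MB, which is why the log steps 4, 6, 10, 18, 34, 66, 130 and then 258, 514 and 1026 MB. A one-liner reproduces the series:

    # prints 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB
    for k in $(seq 1 10); do printf '%dMB ' $(( (1 << k) + 2 )); done; echo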
00:04:30.761 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.761 EAL: Restoring previous memory policy: 4 00:04:30.761 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.761 EAL: request: mp_malloc_sync 00:04:30.761 EAL: No shared files mode enabled, IPC is disabled 00:04:30.761 EAL: Heap on socket 0 was expanded by 258MB 00:04:30.761 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.761 EAL: request: mp_malloc_sync 00:04:30.761 EAL: No shared files mode enabled, IPC is disabled 00:04:30.761 EAL: Heap on socket 0 was shrunk by 258MB 00:04:30.761 EAL: Trying to obtain current memory policy. 00:04:30.761 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.020 EAL: Restoring previous memory policy: 4 00:04:31.020 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.020 EAL: request: mp_malloc_sync 00:04:31.020 EAL: No shared files mode enabled, IPC is disabled 00:04:31.020 EAL: Heap on socket 0 was expanded by 514MB 00:04:31.020 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.020 EAL: request: mp_malloc_sync 00:04:31.020 EAL: No shared files mode enabled, IPC is disabled 00:04:31.020 EAL: Heap on socket 0 was shrunk by 514MB 00:04:31.020 EAL: Trying to obtain current memory policy. 00:04:31.020 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.278 EAL: Restoring previous memory policy: 4 00:04:31.278 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.278 EAL: request: mp_malloc_sync 00:04:31.278 EAL: No shared files mode enabled, IPC is disabled 00:04:31.278 EAL: Heap on socket 0 was expanded by 1026MB 00:04:31.536 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.536 EAL: request: mp_malloc_sync 00:04:31.536 EAL: No shared files mode enabled, IPC is disabled 00:04:31.536 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:31.536 passed 00:04:31.536 00:04:31.536 Run Summary: Type Total Ran Passed Failed Inactive 00:04:31.536 suites 1 1 n/a 0 0 00:04:31.536 tests 2 2 2 0 0 00:04:31.536 asserts 497 497 497 0 n/a 00:04:31.536 00:04:31.536 Elapsed time = 0.957 seconds 00:04:31.536 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.536 EAL: request: mp_malloc_sync 00:04:31.536 EAL: No shared files mode enabled, IPC is disabled 00:04:31.536 EAL: Heap on socket 0 was shrunk by 2MB 00:04:31.536 EAL: No shared files mode enabled, IPC is disabled 00:04:31.536 EAL: No shared files mode enabled, IPC is disabled 00:04:31.536 EAL: No shared files mode enabled, IPC is disabled 00:04:31.536 00:04:31.536 real 0m1.058s 00:04:31.536 user 0m0.625s 00:04:31.536 sys 0m0.406s 00:04:31.536 01:09:57 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.536 01:09:57 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:31.536 ************************************ 00:04:31.536 END TEST env_vtophys 00:04:31.536 ************************************ 00:04:31.536 01:09:57 env -- common/autotest_common.sh@1142 -- # return 0 00:04:31.536 01:09:57 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:31.536 01:09:57 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:31.536 01:09:57 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.537 01:09:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:31.795 ************************************ 00:04:31.795 START TEST env_pci 00:04:31.795 ************************************ 00:04:31.795 01:09:57 env.env_pci -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:31.795 00:04:31.795 00:04:31.795 CUnit - A unit testing framework for C - Version 2.1-3 00:04:31.795 http://cunit.sourceforge.net/ 00:04:31.795 00:04:31.795 00:04:31.795 Suite: pci 00:04:31.795 Test: pci_hook ...[2024-07-16 01:09:57.556553] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3201617 has claimed it 00:04:31.795 EAL: Cannot find device (10000:00:01.0) 00:04:31.795 EAL: Failed to attach device on primary process 00:04:31.795 passed 00:04:31.795 00:04:31.795 Run Summary: Type Total Ran Passed Failed Inactive 00:04:31.795 suites 1 1 n/a 0 0 00:04:31.795 tests 1 1 1 0 0 00:04:31.795 asserts 25 25 25 0 n/a 00:04:31.795 00:04:31.795 Elapsed time = 0.028 seconds 00:04:31.795 00:04:31.795 real 0m0.048s 00:04:31.795 user 0m0.019s 00:04:31.795 sys 0m0.029s 00:04:31.795 01:09:57 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.795 01:09:57 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:31.795 ************************************ 00:04:31.795 END TEST env_pci 00:04:31.795 ************************************ 00:04:31.795 01:09:57 env -- common/autotest_common.sh@1142 -- # return 0 00:04:31.795 01:09:57 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:31.795 01:09:57 env -- env/env.sh@15 -- # uname 00:04:31.795 01:09:57 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:31.795 01:09:57 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:31.795 01:09:57 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:31.795 01:09:57 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:31.795 01:09:57 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.795 01:09:57 env -- common/autotest_common.sh@10 -- # set +x 00:04:31.795 ************************************ 00:04:31.795 START TEST env_dpdk_post_init 00:04:31.795 ************************************ 00:04:31.795 01:09:57 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:31.795 EAL: Detected CPU lcores: 96 00:04:31.795 EAL: Detected NUMA nodes: 2 00:04:31.795 EAL: Detected shared linkage of DPDK 00:04:31.795 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:31.795 EAL: Selected IOVA mode 'VA' 00:04:31.795 EAL: No free 2048 kB hugepages reported on node 1 00:04:31.795 EAL: VFIO support initialized 00:04:31.795 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:31.795 EAL: Using IOMMU type 1 (Type 1) 00:04:31.795 EAL: Ignore mapping IO port bar(1) 00:04:31.795 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:31.795 EAL: Ignore mapping IO port bar(1) 00:04:31.795 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:32.054 EAL: Ignore mapping IO port bar(1) 00:04:32.054 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:32.054 EAL: Ignore mapping IO port bar(1) 00:04:32.054 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:32.054 EAL: Ignore mapping IO port bar(1) 00:04:32.054 EAL: Probe PCI driver: spdk_ioat (8086:2021) 
device: 0000:00:04.4 (socket 0) 00:04:32.054 EAL: Ignore mapping IO port bar(1) 00:04:32.054 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:32.054 EAL: Ignore mapping IO port bar(1) 00:04:32.054 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:32.054 EAL: Ignore mapping IO port bar(1) 00:04:32.054 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:32.622 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:32.622 EAL: Ignore mapping IO port bar(1) 00:04:32.622 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:32.622 EAL: Ignore mapping IO port bar(1) 00:04:32.622 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:32.880 EAL: Ignore mapping IO port bar(1) 00:04:32.880 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:32.880 EAL: Ignore mapping IO port bar(1) 00:04:32.880 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:32.880 EAL: Ignore mapping IO port bar(1) 00:04:32.880 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:32.880 EAL: Ignore mapping IO port bar(1) 00:04:32.880 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:32.880 EAL: Ignore mapping IO port bar(1) 00:04:32.880 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:32.880 EAL: Ignore mapping IO port bar(1) 00:04:32.880 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:36.155 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:36.155 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:36.720 Starting DPDK initialization... 00:04:36.720 Starting SPDK post initialization... 00:04:36.720 SPDK NVMe probe 00:04:36.720 Attaching to 0000:5e:00.0 00:04:36.720 Attached to 0000:5e:00.0 00:04:36.720 Cleaning up... 
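After an attach/unmap cycle like the one just logged, the quickest way to see which driver currently owns the device is the sysfs driver symlink. A minimal sketch:

    bdf=0000:5e:00.0
    drv=/sys/bus/pci/devices/$bdf/driver
    if [ -e "$drv" ]; then
        basename "$(readlink -f "$drv")"    # vfio-pci or nvme, depending on state
    else
        echo "no driver bound to $bdf"
    fi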
00:04:36.720 00:04:36.720 real 0m4.821s 00:04:36.720 user 0m3.754s 00:04:36.720 sys 0m0.141s 00:04:36.720 01:10:02 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.720 01:10:02 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:36.720 ************************************ 00:04:36.720 END TEST env_dpdk_post_init 00:04:36.720 ************************************ 00:04:36.720 01:10:02 env -- common/autotest_common.sh@1142 -- # return 0 00:04:36.720 01:10:02 env -- env/env.sh@26 -- # uname 00:04:36.720 01:10:02 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:36.720 01:10:02 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:36.720 01:10:02 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.720 01:10:02 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.720 01:10:02 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.720 ************************************ 00:04:36.720 START TEST env_mem_callbacks 00:04:36.720 ************************************ 00:04:36.720 01:10:02 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:36.720 EAL: Detected CPU lcores: 96 00:04:36.720 EAL: Detected NUMA nodes: 2 00:04:36.720 EAL: Detected shared linkage of DPDK 00:04:36.720 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:36.720 EAL: Selected IOVA mode 'VA' 00:04:36.720 EAL: No free 2048 kB hugepages reported on node 1 00:04:36.720 EAL: VFIO support initialized 00:04:36.720 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:36.720 00:04:36.720 00:04:36.720 CUnit - A unit testing framework for C - Version 2.1-3 00:04:36.720 http://cunit.sourceforge.net/ 00:04:36.720 00:04:36.720 00:04:36.720 Suite: memory 00:04:36.720 Test: test ... 
00:04:36.720 register 0x200000200000 2097152 00:04:36.720 malloc 3145728 00:04:36.720 register 0x200000400000 4194304 00:04:36.720 buf 0x200000500000 len 3145728 PASSED 00:04:36.720 malloc 64 00:04:36.720 buf 0x2000004fff40 len 64 PASSED 00:04:36.720 malloc 4194304 00:04:36.720 register 0x200000800000 6291456 00:04:36.720 buf 0x200000a00000 len 4194304 PASSED 00:04:36.720 free 0x200000500000 3145728 00:04:36.720 free 0x2000004fff40 64 00:04:36.720 unregister 0x200000400000 4194304 PASSED 00:04:36.720 free 0x200000a00000 4194304 00:04:36.720 unregister 0x200000800000 6291456 PASSED 00:04:36.720 malloc 8388608 00:04:36.720 register 0x200000400000 10485760 00:04:36.720 buf 0x200000600000 len 8388608 PASSED 00:04:36.720 free 0x200000600000 8388608 00:04:36.720 unregister 0x200000400000 10485760 PASSED 00:04:36.720 passed 00:04:36.720 00:04:36.720 Run Summary: Type Total Ran Passed Failed Inactive 00:04:36.720 suites 1 1 n/a 0 0 00:04:36.720 tests 1 1 1 0 0 00:04:36.720 asserts 15 15 15 0 n/a 00:04:36.720 00:04:36.720 Elapsed time = 0.005 seconds 00:04:36.720 00:04:36.720 real 0m0.054s 00:04:36.720 user 0m0.019s 00:04:36.720 sys 0m0.035s 00:04:36.720 01:10:02 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.720 01:10:02 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:36.720 ************************************ 00:04:36.720 END TEST env_mem_callbacks 00:04:36.720 ************************************ 00:04:36.720 01:10:02 env -- common/autotest_common.sh@1142 -- # return 0 00:04:36.720 00:04:36.720 real 0m6.529s 00:04:36.720 user 0m4.706s 00:04:36.720 sys 0m0.901s 00:04:36.720 01:10:02 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.720 01:10:02 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.720 ************************************ 00:04:36.720 END TEST env 00:04:36.720 ************************************ 00:04:36.720 01:10:02 -- common/autotest_common.sh@1142 -- # return 0 00:04:36.720 01:10:02 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:36.720 01:10:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.720 01:10:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.720 01:10:02 -- common/autotest_common.sh@10 -- # set +x 00:04:36.720 ************************************ 00:04:36.720 START TEST rpc 00:04:36.720 ************************************ 00:04:36.720 01:10:02 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:36.977 * Looking for test storage... 00:04:36.977 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:36.977 01:10:02 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3202655 00:04:36.977 01:10:02 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:36.977 01:10:02 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:36.977 01:10:02 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3202655 00:04:36.977 01:10:02 rpc -- common/autotest_common.sh@829 -- # '[' -z 3202655 ']' 00:04:36.977 01:10:02 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.977 01:10:02 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:36.977 01:10:02 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
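waitforlisten's body is not shown in this trace, but the behavior it implements is a bounded poll against the RPC socket. A hedged sketch of that pattern; the retry count and interval are assumptions, and spdk_get_version is a standard SPDK RPC:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for _ in $(seq 1 100); do
        "$rpc" -s /var/tmp/spdk.sock spdk_get_version &>/dev/null && break
        sleep 0.1
    done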
00:04:36.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.977 01:10:02 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:36.977 01:10:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.977 [2024-07-16 01:10:02.815874] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:04:36.977 [2024-07-16 01:10:02.815924] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3202655 ] 00:04:36.977 EAL: No free 2048 kB hugepages reported on node 1 00:04:36.977 [2024-07-16 01:10:02.871664] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.977 [2024-07-16 01:10:02.950306] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:36.977 [2024-07-16 01:10:02.950349] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3202655' to capture a snapshot of events at runtime. 00:04:36.977 [2024-07-16 01:10:02.950356] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:36.977 [2024-07-16 01:10:02.950363] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:36.977 [2024-07-16 01:10:02.950368] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3202655 for offline analysis/debug. 00:04:36.977 [2024-07-16 01:10:02.950387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.907 01:10:03 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:37.907 01:10:03 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:37.907 01:10:03 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:37.907 01:10:03 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:37.907 01:10:03 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:37.907 01:10:03 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:37.907 01:10:03 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.907 01:10:03 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.907 01:10:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.907 ************************************ 00:04:37.907 START TEST rpc_integrity 00:04:37.907 ************************************ 00:04:37.907 01:10:03 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:37.907 01:10:03 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:37.907 01:10:03 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.907 01:10:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.907 01:10:03 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.907 01:10:03 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:04:37.907 01:10:03 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:37.907 01:10:03 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:37.907 01:10:03 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:37.907 01:10:03 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.907 01:10:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.907 01:10:03 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.907 01:10:03 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:37.907 01:10:03 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:37.907 01:10:03 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.907 01:10:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.907 01:10:03 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.907 01:10:03 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:37.907 { 00:04:37.907 "name": "Malloc0", 00:04:37.907 "aliases": [ 00:04:37.907 "ffaae560-d75f-43da-b6b1-27a943bbe66a" 00:04:37.907 ], 00:04:37.907 "product_name": "Malloc disk", 00:04:37.907 "block_size": 512, 00:04:37.907 "num_blocks": 16384, 00:04:37.907 "uuid": "ffaae560-d75f-43da-b6b1-27a943bbe66a", 00:04:37.907 "assigned_rate_limits": { 00:04:37.907 "rw_ios_per_sec": 0, 00:04:37.907 "rw_mbytes_per_sec": 0, 00:04:37.907 "r_mbytes_per_sec": 0, 00:04:37.907 "w_mbytes_per_sec": 0 00:04:37.907 }, 00:04:37.907 "claimed": false, 00:04:37.907 "zoned": false, 00:04:37.907 "supported_io_types": { 00:04:37.907 "read": true, 00:04:37.907 "write": true, 00:04:37.907 "unmap": true, 00:04:37.907 "flush": true, 00:04:37.907 "reset": true, 00:04:37.907 "nvme_admin": false, 00:04:37.907 "nvme_io": false, 00:04:37.907 "nvme_io_md": false, 00:04:37.907 "write_zeroes": true, 00:04:37.907 "zcopy": true, 00:04:37.907 "get_zone_info": false, 00:04:37.907 "zone_management": false, 00:04:37.907 "zone_append": false, 00:04:37.907 "compare": false, 00:04:37.907 "compare_and_write": false, 00:04:37.907 "abort": true, 00:04:37.907 "seek_hole": false, 00:04:37.907 "seek_data": false, 00:04:37.907 "copy": true, 00:04:37.907 "nvme_iov_md": false 00:04:37.907 }, 00:04:37.907 "memory_domains": [ 00:04:37.907 { 00:04:37.907 "dma_device_id": "system", 00:04:37.907 "dma_device_type": 1 00:04:37.907 }, 00:04:37.907 { 00:04:37.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.907 "dma_device_type": 2 00:04:37.907 } 00:04:37.907 ], 00:04:37.907 "driver_specific": {} 00:04:37.907 } 00:04:37.907 ]' 00:04:37.907 01:10:03 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:37.907 01:10:03 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:37.908 01:10:03 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:37.908 01:10:03 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.908 01:10:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.908 [2024-07-16 01:10:03.757045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:37.908 [2024-07-16 01:10:03.757076] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:37.908 [2024-07-16 01:10:03.757088] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12188c0 00:04:37.908 [2024-07-16 01:10:03.757094] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:37.908 
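The NOTICE lines on either side of this point show the passthru vbdev opening and claiming its base bdev, then registering itself. Condensed, the RPC sequence rpc_integrity drives is roughly the following; scripts/rpc.py is the standalone equivalent of the test's rpc_cmd wrapper, and the bdev names match the log:

  # Create an 8 MiB malloc bdev with a 512-byte block size (returns "Malloc0"),
  # layer a passthru vbdev on top, then tear everything down in reverse order.
  scripts/rpc.py bdev_malloc_create 8 512
  scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  scripts/rpc.py bdev_get_bdevs | jq length   # expect 2: Malloc0 + Passthru0
  scripts/rpc.py bdev_passthru_delete Passthru0
  scripts/rpc.py bdev_malloc_delete Malloc0
  scripts/rpc.py bdev_get_bdevs | jq length   # expect 0 again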
[2024-07-16 01:10:03.758142] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:37.908 [2024-07-16 01:10:03.758162] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:37.908 Passthru0 00:04:37.908 01:10:03 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.908 01:10:03 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:37.908 01:10:03 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.908 01:10:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.908 01:10:03 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.908 01:10:03 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:37.908 { 00:04:37.908 "name": "Malloc0", 00:04:37.908 "aliases": [ 00:04:37.908 "ffaae560-d75f-43da-b6b1-27a943bbe66a" 00:04:37.908 ], 00:04:37.908 "product_name": "Malloc disk", 00:04:37.908 "block_size": 512, 00:04:37.908 "num_blocks": 16384, 00:04:37.908 "uuid": "ffaae560-d75f-43da-b6b1-27a943bbe66a", 00:04:37.908 "assigned_rate_limits": { 00:04:37.908 "rw_ios_per_sec": 0, 00:04:37.908 "rw_mbytes_per_sec": 0, 00:04:37.908 "r_mbytes_per_sec": 0, 00:04:37.908 "w_mbytes_per_sec": 0 00:04:37.908 }, 00:04:37.908 "claimed": true, 00:04:37.908 "claim_type": "exclusive_write", 00:04:37.908 "zoned": false, 00:04:37.908 "supported_io_types": { 00:04:37.908 "read": true, 00:04:37.908 "write": true, 00:04:37.908 "unmap": true, 00:04:37.908 "flush": true, 00:04:37.908 "reset": true, 00:04:37.908 "nvme_admin": false, 00:04:37.908 "nvme_io": false, 00:04:37.908 "nvme_io_md": false, 00:04:37.908 "write_zeroes": true, 00:04:37.908 "zcopy": true, 00:04:37.908 "get_zone_info": false, 00:04:37.908 "zone_management": false, 00:04:37.908 "zone_append": false, 00:04:37.908 "compare": false, 00:04:37.908 "compare_and_write": false, 00:04:37.908 "abort": true, 00:04:37.908 "seek_hole": false, 00:04:37.908 "seek_data": false, 00:04:37.908 "copy": true, 00:04:37.908 "nvme_iov_md": false 00:04:37.908 }, 00:04:37.908 "memory_domains": [ 00:04:37.908 { 00:04:37.908 "dma_device_id": "system", 00:04:37.908 "dma_device_type": 1 00:04:37.908 }, 00:04:37.908 { 00:04:37.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.908 "dma_device_type": 2 00:04:37.908 } 00:04:37.908 ], 00:04:37.908 "driver_specific": {} 00:04:37.908 }, 00:04:37.908 { 00:04:37.908 "name": "Passthru0", 00:04:37.908 "aliases": [ 00:04:37.908 "e05bc756-c482-5622-bd27-8ff6a668e459" 00:04:37.908 ], 00:04:37.908 "product_name": "passthru", 00:04:37.908 "block_size": 512, 00:04:37.908 "num_blocks": 16384, 00:04:37.908 "uuid": "e05bc756-c482-5622-bd27-8ff6a668e459", 00:04:37.908 "assigned_rate_limits": { 00:04:37.908 "rw_ios_per_sec": 0, 00:04:37.908 "rw_mbytes_per_sec": 0, 00:04:37.908 "r_mbytes_per_sec": 0, 00:04:37.908 "w_mbytes_per_sec": 0 00:04:37.908 }, 00:04:37.908 "claimed": false, 00:04:37.908 "zoned": false, 00:04:37.908 "supported_io_types": { 00:04:37.908 "read": true, 00:04:37.908 "write": true, 00:04:37.908 "unmap": true, 00:04:37.908 "flush": true, 00:04:37.908 "reset": true, 00:04:37.908 "nvme_admin": false, 00:04:37.908 "nvme_io": false, 00:04:37.908 "nvme_io_md": false, 00:04:37.908 "write_zeroes": true, 00:04:37.908 "zcopy": true, 00:04:37.908 "get_zone_info": false, 00:04:37.908 "zone_management": false, 00:04:37.908 "zone_append": false, 00:04:37.908 "compare": false, 00:04:37.908 "compare_and_write": false, 00:04:37.908 "abort": true, 00:04:37.908 "seek_hole": false, 
00:04:37.908 "seek_data": false, 00:04:37.908 "copy": true, 00:04:37.908 "nvme_iov_md": false 00:04:37.908 }, 00:04:37.908 "memory_domains": [ 00:04:37.908 { 00:04:37.908 "dma_device_id": "system", 00:04:37.908 "dma_device_type": 1 00:04:37.908 }, 00:04:37.908 { 00:04:37.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.908 "dma_device_type": 2 00:04:37.908 } 00:04:37.908 ], 00:04:37.908 "driver_specific": { 00:04:37.908 "passthru": { 00:04:37.908 "name": "Passthru0", 00:04:37.908 "base_bdev_name": "Malloc0" 00:04:37.908 } 00:04:37.908 } 00:04:37.908 } 00:04:37.908 ]' 00:04:37.908 01:10:03 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:37.908 01:10:03 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:37.908 01:10:03 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:37.908 01:10:03 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.908 01:10:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.908 01:10:03 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.908 01:10:03 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:37.908 01:10:03 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.908 01:10:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.908 01:10:03 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.908 01:10:03 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:37.908 01:10:03 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:37.908 01:10:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.908 01:10:03 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:37.908 01:10:03 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:37.908 01:10:03 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:38.164 01:10:03 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:38.164 00:04:38.164 real 0m0.278s 00:04:38.164 user 0m0.173s 00:04:38.164 sys 0m0.037s 00:04:38.164 01:10:03 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.164 01:10:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.164 ************************************ 00:04:38.164 END TEST rpc_integrity 00:04:38.164 ************************************ 00:04:38.164 01:10:03 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:38.164 01:10:03 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:38.164 01:10:03 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.164 01:10:03 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.164 01:10:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.164 ************************************ 00:04:38.164 START TEST rpc_plugins 00:04:38.164 ************************************ 00:04:38.164 01:10:03 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:38.164 01:10:03 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:38.164 01:10:03 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.164 01:10:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:38.164 01:10:03 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.164 01:10:03 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:38.164 01:10:03 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:04:38.164 01:10:03 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.164 01:10:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:38.164 01:10:03 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.164 01:10:03 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:38.164 { 00:04:38.164 "name": "Malloc1", 00:04:38.164 "aliases": [ 00:04:38.164 "de9af65a-4ed8-4087-a9bd-ff92dad4c1f2" 00:04:38.164 ], 00:04:38.164 "product_name": "Malloc disk", 00:04:38.164 "block_size": 4096, 00:04:38.164 "num_blocks": 256, 00:04:38.164 "uuid": "de9af65a-4ed8-4087-a9bd-ff92dad4c1f2", 00:04:38.164 "assigned_rate_limits": { 00:04:38.164 "rw_ios_per_sec": 0, 00:04:38.164 "rw_mbytes_per_sec": 0, 00:04:38.164 "r_mbytes_per_sec": 0, 00:04:38.164 "w_mbytes_per_sec": 0 00:04:38.164 }, 00:04:38.164 "claimed": false, 00:04:38.164 "zoned": false, 00:04:38.164 "supported_io_types": { 00:04:38.164 "read": true, 00:04:38.164 "write": true, 00:04:38.164 "unmap": true, 00:04:38.164 "flush": true, 00:04:38.164 "reset": true, 00:04:38.164 "nvme_admin": false, 00:04:38.164 "nvme_io": false, 00:04:38.164 "nvme_io_md": false, 00:04:38.164 "write_zeroes": true, 00:04:38.164 "zcopy": true, 00:04:38.164 "get_zone_info": false, 00:04:38.164 "zone_management": false, 00:04:38.164 "zone_append": false, 00:04:38.164 "compare": false, 00:04:38.164 "compare_and_write": false, 00:04:38.164 "abort": true, 00:04:38.164 "seek_hole": false, 00:04:38.164 "seek_data": false, 00:04:38.164 "copy": true, 00:04:38.164 "nvme_iov_md": false 00:04:38.164 }, 00:04:38.164 "memory_domains": [ 00:04:38.164 { 00:04:38.164 "dma_device_id": "system", 00:04:38.164 "dma_device_type": 1 00:04:38.164 }, 00:04:38.164 { 00:04:38.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.164 "dma_device_type": 2 00:04:38.164 } 00:04:38.164 ], 00:04:38.164 "driver_specific": {} 00:04:38.164 } 00:04:38.164 ]' 00:04:38.164 01:10:03 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:38.164 01:10:04 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:38.164 01:10:04 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:38.164 01:10:04 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.164 01:10:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:38.164 01:10:04 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.164 01:10:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:38.164 01:10:04 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.164 01:10:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:38.164 01:10:04 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.164 01:10:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:38.164 01:10:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:38.164 01:10:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:38.164 00:04:38.164 real 0m0.140s 00:04:38.164 user 0m0.086s 00:04:38.164 sys 0m0.019s 00:04:38.164 01:10:04 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.164 01:10:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:38.164 ************************************ 00:04:38.164 END TEST rpc_plugins 00:04:38.164 ************************************ 00:04:38.164 01:10:04 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:38.164 01:10:04 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:38.164 01:10:04 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.164 01:10:04 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.164 01:10:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.421 ************************************ 00:04:38.421 START TEST rpc_trace_cmd_test 00:04:38.421 ************************************ 00:04:38.421 01:10:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:38.421 01:10:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:38.421 01:10:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:38.421 01:10:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.421 01:10:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:38.421 01:10:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.421 01:10:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:38.421 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3202655", 00:04:38.421 "tpoint_group_mask": "0x8", 00:04:38.421 "iscsi_conn": { 00:04:38.421 "mask": "0x2", 00:04:38.421 "tpoint_mask": "0x0" 00:04:38.421 }, 00:04:38.421 "scsi": { 00:04:38.421 "mask": "0x4", 00:04:38.421 "tpoint_mask": "0x0" 00:04:38.421 }, 00:04:38.421 "bdev": { 00:04:38.421 "mask": "0x8", 00:04:38.421 "tpoint_mask": "0xffffffffffffffff" 00:04:38.421 }, 00:04:38.421 "nvmf_rdma": { 00:04:38.421 "mask": "0x10", 00:04:38.421 "tpoint_mask": "0x0" 00:04:38.421 }, 00:04:38.421 "nvmf_tcp": { 00:04:38.421 "mask": "0x20", 00:04:38.421 "tpoint_mask": "0x0" 00:04:38.421 }, 00:04:38.421 "ftl": { 00:04:38.421 "mask": "0x40", 00:04:38.421 "tpoint_mask": "0x0" 00:04:38.421 }, 00:04:38.421 "blobfs": { 00:04:38.421 "mask": "0x80", 00:04:38.421 "tpoint_mask": "0x0" 00:04:38.421 }, 00:04:38.421 "dsa": { 00:04:38.421 "mask": "0x200", 00:04:38.421 "tpoint_mask": "0x0" 00:04:38.421 }, 00:04:38.421 "thread": { 00:04:38.421 "mask": "0x400", 00:04:38.421 "tpoint_mask": "0x0" 00:04:38.421 }, 00:04:38.421 "nvme_pcie": { 00:04:38.421 "mask": "0x800", 00:04:38.421 "tpoint_mask": "0x0" 00:04:38.421 }, 00:04:38.421 "iaa": { 00:04:38.421 "mask": "0x1000", 00:04:38.421 "tpoint_mask": "0x0" 00:04:38.421 }, 00:04:38.421 "nvme_tcp": { 00:04:38.421 "mask": "0x2000", 00:04:38.421 "tpoint_mask": "0x0" 00:04:38.421 }, 00:04:38.421 "bdev_nvme": { 00:04:38.421 "mask": "0x4000", 00:04:38.421 "tpoint_mask": "0x0" 00:04:38.421 }, 00:04:38.421 "sock": { 00:04:38.421 "mask": "0x8000", 00:04:38.421 "tpoint_mask": "0x0" 00:04:38.421 } 00:04:38.421 }' 00:04:38.421 01:10:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:38.421 01:10:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:38.421 01:10:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:38.421 01:10:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:38.421 01:10:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:38.421 01:10:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:38.421 01:10:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:38.421 01:10:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:38.421 01:10:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:38.421 01:10:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
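Those jq assertions pin down the trace state the target was started with: spdk_tgt was launched with '-e bdev', so the group mask is 0x8, only the bdev group carries a fully-set tpoint_mask, and the shared-memory trace file is named after the target's pid. A rough standalone equivalent against a target already running with '-e bdev':

  scripts/rpc.py trace_get_info | jq -r .tpoint_group_mask   # "0x8" -> bdev group only
  scripts/rpc.py trace_get_info | jq -r .bdev.tpoint_mask    # 0xffffffffffffffff
  scripts/rpc.py trace_get_info | jq -r .tpoint_shm_path     # /dev/shm/spdk_tgt_trace.pid<pid>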
00:04:38.421 00:04:38.421 real 0m0.227s 00:04:38.421 user 0m0.195s 00:04:38.421 sys 0m0.022s 00:04:38.421 01:10:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.421 01:10:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:38.421 ************************************ 00:04:38.421 END TEST rpc_trace_cmd_test 00:04:38.421 ************************************ 00:04:38.677 01:10:04 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:38.677 01:10:04 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:38.677 01:10:04 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:38.677 01:10:04 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:38.677 01:10:04 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.677 01:10:04 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.677 01:10:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.677 ************************************ 00:04:38.677 START TEST rpc_daemon_integrity 00:04:38.677 ************************************ 00:04:38.677 01:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:38.677 01:10:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:38.677 01:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.677 01:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.677 01:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.677 01:10:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:38.677 01:10:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:38.677 01:10:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:38.677 01:10:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:38.677 01:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.677 01:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.677 01:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.677 01:10:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:38.677 01:10:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:38.677 01:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.677 01:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.677 01:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.677 01:10:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:38.677 { 00:04:38.677 "name": "Malloc2", 00:04:38.677 "aliases": [ 00:04:38.677 "a40ca1f0-cb39-481f-907e-99224609319e" 00:04:38.677 ], 00:04:38.677 "product_name": "Malloc disk", 00:04:38.677 "block_size": 512, 00:04:38.677 "num_blocks": 16384, 00:04:38.677 "uuid": "a40ca1f0-cb39-481f-907e-99224609319e", 00:04:38.677 "assigned_rate_limits": { 00:04:38.677 "rw_ios_per_sec": 0, 00:04:38.677 "rw_mbytes_per_sec": 0, 00:04:38.677 "r_mbytes_per_sec": 0, 00:04:38.677 "w_mbytes_per_sec": 0 00:04:38.677 }, 00:04:38.677 "claimed": false, 00:04:38.677 "zoned": false, 00:04:38.677 "supported_io_types": { 00:04:38.677 "read": true, 00:04:38.677 "write": true, 00:04:38.677 "unmap": true, 00:04:38.677 "flush": true, 00:04:38.677 "reset": true, 00:04:38.677 "nvme_admin": false, 00:04:38.677 "nvme_io": false, 
00:04:38.677 "nvme_io_md": false, 00:04:38.677 "write_zeroes": true, 00:04:38.677 "zcopy": true, 00:04:38.677 "get_zone_info": false, 00:04:38.677 "zone_management": false, 00:04:38.677 "zone_append": false, 00:04:38.677 "compare": false, 00:04:38.677 "compare_and_write": false, 00:04:38.677 "abort": true, 00:04:38.677 "seek_hole": false, 00:04:38.677 "seek_data": false, 00:04:38.677 "copy": true, 00:04:38.677 "nvme_iov_md": false 00:04:38.677 }, 00:04:38.677 "memory_domains": [ 00:04:38.677 { 00:04:38.677 "dma_device_id": "system", 00:04:38.677 "dma_device_type": 1 00:04:38.677 }, 00:04:38.677 { 00:04:38.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.677 "dma_device_type": 2 00:04:38.677 } 00:04:38.677 ], 00:04:38.677 "driver_specific": {} 00:04:38.677 } 00:04:38.677 ]' 00:04:38.677 01:10:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:38.677 01:10:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:38.677 01:10:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:38.677 01:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.677 01:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.677 [2024-07-16 01:10:04.591285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:38.677 [2024-07-16 01:10:04.591310] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:38.677 [2024-07-16 01:10:04.591324] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1219210 00:04:38.677 [2024-07-16 01:10:04.591330] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:38.677 [2024-07-16 01:10:04.592258] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:38.677 [2024-07-16 01:10:04.592276] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:38.677 Passthru0 00:04:38.677 01:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.677 01:10:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:38.677 01:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.677 01:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.677 01:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.677 01:10:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:38.677 { 00:04:38.677 "name": "Malloc2", 00:04:38.677 "aliases": [ 00:04:38.677 "a40ca1f0-cb39-481f-907e-99224609319e" 00:04:38.677 ], 00:04:38.677 "product_name": "Malloc disk", 00:04:38.677 "block_size": 512, 00:04:38.677 "num_blocks": 16384, 00:04:38.677 "uuid": "a40ca1f0-cb39-481f-907e-99224609319e", 00:04:38.677 "assigned_rate_limits": { 00:04:38.677 "rw_ios_per_sec": 0, 00:04:38.677 "rw_mbytes_per_sec": 0, 00:04:38.677 "r_mbytes_per_sec": 0, 00:04:38.677 "w_mbytes_per_sec": 0 00:04:38.677 }, 00:04:38.677 "claimed": true, 00:04:38.677 "claim_type": "exclusive_write", 00:04:38.677 "zoned": false, 00:04:38.677 "supported_io_types": { 00:04:38.677 "read": true, 00:04:38.677 "write": true, 00:04:38.677 "unmap": true, 00:04:38.677 "flush": true, 00:04:38.677 "reset": true, 00:04:38.677 "nvme_admin": false, 00:04:38.677 "nvme_io": false, 00:04:38.677 "nvme_io_md": false, 00:04:38.677 "write_zeroes": true, 00:04:38.677 "zcopy": true, 00:04:38.677 "get_zone_info": 
false, 00:04:38.677 "zone_management": false, 00:04:38.677 "zone_append": false, 00:04:38.677 "compare": false, 00:04:38.677 "compare_and_write": false, 00:04:38.677 "abort": true, 00:04:38.677 "seek_hole": false, 00:04:38.677 "seek_data": false, 00:04:38.677 "copy": true, 00:04:38.677 "nvme_iov_md": false 00:04:38.678 }, 00:04:38.678 "memory_domains": [ 00:04:38.678 { 00:04:38.678 "dma_device_id": "system", 00:04:38.678 "dma_device_type": 1 00:04:38.678 }, 00:04:38.678 { 00:04:38.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.678 "dma_device_type": 2 00:04:38.678 } 00:04:38.678 ], 00:04:38.678 "driver_specific": {} 00:04:38.678 }, 00:04:38.678 { 00:04:38.678 "name": "Passthru0", 00:04:38.678 "aliases": [ 00:04:38.678 "159e2aab-60b6-5eac-adb6-655bfe874ec0" 00:04:38.678 ], 00:04:38.678 "product_name": "passthru", 00:04:38.678 "block_size": 512, 00:04:38.678 "num_blocks": 16384, 00:04:38.678 "uuid": "159e2aab-60b6-5eac-adb6-655bfe874ec0", 00:04:38.678 "assigned_rate_limits": { 00:04:38.678 "rw_ios_per_sec": 0, 00:04:38.678 "rw_mbytes_per_sec": 0, 00:04:38.678 "r_mbytes_per_sec": 0, 00:04:38.678 "w_mbytes_per_sec": 0 00:04:38.678 }, 00:04:38.678 "claimed": false, 00:04:38.678 "zoned": false, 00:04:38.678 "supported_io_types": { 00:04:38.678 "read": true, 00:04:38.678 "write": true, 00:04:38.678 "unmap": true, 00:04:38.678 "flush": true, 00:04:38.678 "reset": true, 00:04:38.678 "nvme_admin": false, 00:04:38.678 "nvme_io": false, 00:04:38.678 "nvme_io_md": false, 00:04:38.678 "write_zeroes": true, 00:04:38.678 "zcopy": true, 00:04:38.678 "get_zone_info": false, 00:04:38.678 "zone_management": false, 00:04:38.678 "zone_append": false, 00:04:38.678 "compare": false, 00:04:38.678 "compare_and_write": false, 00:04:38.678 "abort": true, 00:04:38.678 "seek_hole": false, 00:04:38.678 "seek_data": false, 00:04:38.678 "copy": true, 00:04:38.678 "nvme_iov_md": false 00:04:38.678 }, 00:04:38.678 "memory_domains": [ 00:04:38.678 { 00:04:38.678 "dma_device_id": "system", 00:04:38.678 "dma_device_type": 1 00:04:38.678 }, 00:04:38.678 { 00:04:38.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.678 "dma_device_type": 2 00:04:38.678 } 00:04:38.678 ], 00:04:38.678 "driver_specific": { 00:04:38.678 "passthru": { 00:04:38.678 "name": "Passthru0", 00:04:38.678 "base_bdev_name": "Malloc2" 00:04:38.678 } 00:04:38.678 } 00:04:38.678 } 00:04:38.678 ]' 00:04:38.678 01:10:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:38.678 01:10:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:38.678 01:10:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:38.678 01:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.935 01:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.935 01:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.935 01:10:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:38.935 01:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.935 01:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.935 01:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.935 01:10:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:38.935 01:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.935 01:10:04 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.935 01:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.935 01:10:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:38.935 01:10:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:38.935 01:10:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:38.935 00:04:38.935 real 0m0.270s 00:04:38.935 user 0m0.175s 00:04:38.935 sys 0m0.034s 00:04:38.935 01:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.935 01:10:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.935 ************************************ 00:04:38.935 END TEST rpc_daemon_integrity 00:04:38.935 ************************************ 00:04:38.935 01:10:04 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:38.935 01:10:04 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:38.935 01:10:04 rpc -- rpc/rpc.sh@84 -- # killprocess 3202655 00:04:38.935 01:10:04 rpc -- common/autotest_common.sh@948 -- # '[' -z 3202655 ']' 00:04:38.935 01:10:04 rpc -- common/autotest_common.sh@952 -- # kill -0 3202655 00:04:38.935 01:10:04 rpc -- common/autotest_common.sh@953 -- # uname 00:04:38.935 01:10:04 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:38.935 01:10:04 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3202655 00:04:38.935 01:10:04 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:38.935 01:10:04 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:38.935 01:10:04 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3202655' 00:04:38.935 killing process with pid 3202655 00:04:38.935 01:10:04 rpc -- common/autotest_common.sh@967 -- # kill 3202655 00:04:38.935 01:10:04 rpc -- common/autotest_common.sh@972 -- # wait 3202655 00:04:39.191 00:04:39.191 real 0m2.434s 00:04:39.191 user 0m3.152s 00:04:39.191 sys 0m0.648s 00:04:39.191 01:10:05 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.191 01:10:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.191 ************************************ 00:04:39.191 END TEST rpc 00:04:39.191 ************************************ 00:04:39.191 01:10:05 -- common/autotest_common.sh@1142 -- # return 0 00:04:39.191 01:10:05 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:39.191 01:10:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.191 01:10:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.191 01:10:05 -- common/autotest_common.sh@10 -- # set +x 00:04:39.191 ************************************ 00:04:39.191 START TEST skip_rpc 00:04:39.191 ************************************ 00:04:39.191 01:10:05 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:39.448 * Looking for test storage... 
00:04:39.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:39.448 01:10:05 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:39.448 01:10:05 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:39.448 01:10:05 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:39.448 01:10:05 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.448 01:10:05 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.448 01:10:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.448 ************************************ 00:04:39.448 START TEST skip_rpc 00:04:39.448 ************************************ 00:04:39.448 01:10:05 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:39.448 01:10:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3203288 00:04:39.448 01:10:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:39.448 01:10:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:39.448 01:10:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:39.448 [2024-07-16 01:10:05.316330] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:04:39.448 [2024-07-16 01:10:05.316378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3203288 ] 00:04:39.448 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.448 [2024-07-16 01:10:05.370958] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.705 [2024-07-16 01:10:05.443341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.959 01:10:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:44.959 01:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:44.959 01:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:44.959 01:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:44.959 01:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:44.959 01:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:44.959 01:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:44.959 01:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:44.959 01:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.959 01:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.959 01:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:44.959 01:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:44.959 01:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:44.959 01:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:44.959 01:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:44.959 01:10:10 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:44.959 01:10:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3203288 00:04:44.959 01:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 3203288 ']' 00:04:44.959 01:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 3203288 00:04:44.959 01:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:44.959 01:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:44.959 01:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3203288 00:04:44.959 01:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:44.959 01:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:44.959 01:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3203288' 00:04:44.959 killing process with pid 3203288 00:04:44.959 01:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 3203288 00:04:44.959 01:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 3203288 00:04:44.959 00:04:44.959 real 0m5.369s 00:04:44.959 user 0m5.136s 00:04:44.959 sys 0m0.259s 00:04:44.959 01:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.960 01:10:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.960 ************************************ 00:04:44.960 END TEST skip_rpc 00:04:44.960 ************************************ 00:04:44.960 01:10:10 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:44.960 01:10:10 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:44.960 01:10:10 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.960 01:10:10 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.960 01:10:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.960 ************************************ 00:04:44.960 START TEST skip_rpc_with_json 00:04:44.960 ************************************ 00:04:44.960 01:10:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:44.960 01:10:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:44.960 01:10:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3204229 00:04:44.960 01:10:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:44.960 01:10:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:44.960 01:10:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3204229 00:04:44.960 01:10:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 3204229 ']' 00:04:44.960 01:10:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.960 01:10:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:44.960 01:10:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
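skip_rpc_with_json, which starts here, runs in two phases: build state in a live target over RPC, snapshot it with save_config, then restart the target from that JSON alone and verify the state reappears. The config blob dumped further below is exactly such a snapshot. In outline, with config.json standing in for the test's config path:

  # Phase 1: create state over RPC, then snapshot the whole target config.
  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py save_config > config.json
  # Phase 2: restart purely from the snapshot; no RPC calls this time.
  build/bin/spdk_tgt --json config.json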
00:04:44.960 01:10:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:44.960 01:10:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:44.960 [2024-07-16 01:10:10.748979] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:04:44.960 [2024-07-16 01:10:10.749021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3204229 ] 00:04:44.960 EAL: No free 2048 kB hugepages reported on node 1 00:04:44.960 [2024-07-16 01:10:10.804353] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.960 [2024-07-16 01:10:10.872398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.891 01:10:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:45.891 01:10:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:45.891 01:10:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:45.891 01:10:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.891 01:10:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:45.891 [2024-07-16 01:10:11.539584] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:45.891 request: 00:04:45.891 { 00:04:45.891 "trtype": "tcp", 00:04:45.891 "method": "nvmf_get_transports", 00:04:45.891 "req_id": 1 00:04:45.891 } 00:04:45.891 Got JSON-RPC error response 00:04:45.891 response: 00:04:45.891 { 00:04:45.891 "code": -19, 00:04:45.891 "message": "No such device" 00:04:45.891 } 00:04:45.891 01:10:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:45.891 01:10:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:45.891 01:10:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.891 01:10:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:45.891 [2024-07-16 01:10:11.551687] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:45.891 01:10:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.891 01:10:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:45.891 01:10:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.891 01:10:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:45.891 01:10:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.892 01:10:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:45.892 { 00:04:45.892 "subsystems": [ 00:04:45.892 { 00:04:45.892 "subsystem": "vfio_user_target", 00:04:45.892 "config": null 00:04:45.892 }, 00:04:45.892 { 00:04:45.892 "subsystem": "keyring", 00:04:45.892 "config": [] 00:04:45.892 }, 00:04:45.892 { 00:04:45.892 "subsystem": "iobuf", 00:04:45.892 "config": [ 00:04:45.892 { 00:04:45.892 "method": "iobuf_set_options", 00:04:45.892 "params": { 00:04:45.892 "small_pool_count": 8192, 00:04:45.892 "large_pool_count": 1024, 00:04:45.892 "small_bufsize": 8192, 00:04:45.892 "large_bufsize": 
135168 00:04:45.892 } 00:04:45.892 } 00:04:45.892 ] 00:04:45.892 }, 00:04:45.892 { 00:04:45.892 "subsystem": "sock", 00:04:45.892 "config": [ 00:04:45.892 { 00:04:45.892 "method": "sock_set_default_impl", 00:04:45.892 "params": { 00:04:45.892 "impl_name": "posix" 00:04:45.892 } 00:04:45.892 }, 00:04:45.892 { 00:04:45.892 "method": "sock_impl_set_options", 00:04:45.892 "params": { 00:04:45.892 "impl_name": "ssl", 00:04:45.892 "recv_buf_size": 4096, 00:04:45.892 "send_buf_size": 4096, 00:04:45.892 "enable_recv_pipe": true, 00:04:45.892 "enable_quickack": false, 00:04:45.892 "enable_placement_id": 0, 00:04:45.892 "enable_zerocopy_send_server": true, 00:04:45.892 "enable_zerocopy_send_client": false, 00:04:45.892 "zerocopy_threshold": 0, 00:04:45.892 "tls_version": 0, 00:04:45.892 "enable_ktls": false 00:04:45.892 } 00:04:45.892 }, 00:04:45.892 { 00:04:45.892 "method": "sock_impl_set_options", 00:04:45.892 "params": { 00:04:45.892 "impl_name": "posix", 00:04:45.892 "recv_buf_size": 2097152, 00:04:45.892 "send_buf_size": 2097152, 00:04:45.892 "enable_recv_pipe": true, 00:04:45.892 "enable_quickack": false, 00:04:45.892 "enable_placement_id": 0, 00:04:45.892 "enable_zerocopy_send_server": true, 00:04:45.892 "enable_zerocopy_send_client": false, 00:04:45.892 "zerocopy_threshold": 0, 00:04:45.892 "tls_version": 0, 00:04:45.892 "enable_ktls": false 00:04:45.892 } 00:04:45.892 } 00:04:45.892 ] 00:04:45.892 }, 00:04:45.892 { 00:04:45.892 "subsystem": "vmd", 00:04:45.892 "config": [] 00:04:45.892 }, 00:04:45.892 { 00:04:45.892 "subsystem": "accel", 00:04:45.892 "config": [ 00:04:45.892 { 00:04:45.892 "method": "accel_set_options", 00:04:45.892 "params": { 00:04:45.892 "small_cache_size": 128, 00:04:45.892 "large_cache_size": 16, 00:04:45.892 "task_count": 2048, 00:04:45.892 "sequence_count": 2048, 00:04:45.892 "buf_count": 2048 00:04:45.892 } 00:04:45.892 } 00:04:45.892 ] 00:04:45.892 }, 00:04:45.892 { 00:04:45.892 "subsystem": "bdev", 00:04:45.892 "config": [ 00:04:45.892 { 00:04:45.892 "method": "bdev_set_options", 00:04:45.892 "params": { 00:04:45.892 "bdev_io_pool_size": 65535, 00:04:45.892 "bdev_io_cache_size": 256, 00:04:45.892 "bdev_auto_examine": true, 00:04:45.892 "iobuf_small_cache_size": 128, 00:04:45.892 "iobuf_large_cache_size": 16 00:04:45.892 } 00:04:45.892 }, 00:04:45.892 { 00:04:45.892 "method": "bdev_raid_set_options", 00:04:45.892 "params": { 00:04:45.892 "process_window_size_kb": 1024 00:04:45.892 } 00:04:45.892 }, 00:04:45.892 { 00:04:45.892 "method": "bdev_iscsi_set_options", 00:04:45.892 "params": { 00:04:45.892 "timeout_sec": 30 00:04:45.892 } 00:04:45.892 }, 00:04:45.892 { 00:04:45.892 "method": "bdev_nvme_set_options", 00:04:45.892 "params": { 00:04:45.892 "action_on_timeout": "none", 00:04:45.892 "timeout_us": 0, 00:04:45.892 "timeout_admin_us": 0, 00:04:45.892 "keep_alive_timeout_ms": 10000, 00:04:45.892 "arbitration_burst": 0, 00:04:45.892 "low_priority_weight": 0, 00:04:45.892 "medium_priority_weight": 0, 00:04:45.892 "high_priority_weight": 0, 00:04:45.892 "nvme_adminq_poll_period_us": 10000, 00:04:45.892 "nvme_ioq_poll_period_us": 0, 00:04:45.892 "io_queue_requests": 0, 00:04:45.892 "delay_cmd_submit": true, 00:04:45.892 "transport_retry_count": 4, 00:04:45.892 "bdev_retry_count": 3, 00:04:45.892 "transport_ack_timeout": 0, 00:04:45.892 "ctrlr_loss_timeout_sec": 0, 00:04:45.892 "reconnect_delay_sec": 0, 00:04:45.892 "fast_io_fail_timeout_sec": 0, 00:04:45.892 "disable_auto_failback": false, 00:04:45.892 "generate_uuids": false, 00:04:45.892 "transport_tos": 0, 
00:04:45.892 "nvme_error_stat": false, 00:04:45.892 "rdma_srq_size": 0, 00:04:45.892 "io_path_stat": false, 00:04:45.892 "allow_accel_sequence": false, 00:04:45.892 "rdma_max_cq_size": 0, 00:04:45.892 "rdma_cm_event_timeout_ms": 0, 00:04:45.892 "dhchap_digests": [ 00:04:45.892 "sha256", 00:04:45.892 "sha384", 00:04:45.892 "sha512" 00:04:45.892 ], 00:04:45.892 "dhchap_dhgroups": [ 00:04:45.892 "null", 00:04:45.892 "ffdhe2048", 00:04:45.892 "ffdhe3072", 00:04:45.892 "ffdhe4096", 00:04:45.892 "ffdhe6144", 00:04:45.892 "ffdhe8192" 00:04:45.892 ] 00:04:45.892 } 00:04:45.892 }, 00:04:45.892 { 00:04:45.892 "method": "bdev_nvme_set_hotplug", 00:04:45.892 "params": { 00:04:45.892 "period_us": 100000, 00:04:45.892 "enable": false 00:04:45.892 } 00:04:45.892 }, 00:04:45.892 { 00:04:45.892 "method": "bdev_wait_for_examine" 00:04:45.892 } 00:04:45.892 ] 00:04:45.892 }, 00:04:45.892 { 00:04:45.892 "subsystem": "scsi", 00:04:45.892 "config": null 00:04:45.892 }, 00:04:45.892 { 00:04:45.892 "subsystem": "scheduler", 00:04:45.892 "config": [ 00:04:45.892 { 00:04:45.892 "method": "framework_set_scheduler", 00:04:45.892 "params": { 00:04:45.892 "name": "static" 00:04:45.892 } 00:04:45.892 } 00:04:45.892 ] 00:04:45.892 }, 00:04:45.892 { 00:04:45.892 "subsystem": "vhost_scsi", 00:04:45.892 "config": [] 00:04:45.892 }, 00:04:45.892 { 00:04:45.892 "subsystem": "vhost_blk", 00:04:45.892 "config": [] 00:04:45.892 }, 00:04:45.892 { 00:04:45.892 "subsystem": "ublk", 00:04:45.892 "config": [] 00:04:45.892 }, 00:04:45.892 { 00:04:45.892 "subsystem": "nbd", 00:04:45.892 "config": [] 00:04:45.892 }, 00:04:45.892 { 00:04:45.892 "subsystem": "nvmf", 00:04:45.892 "config": [ 00:04:45.892 { 00:04:45.892 "method": "nvmf_set_config", 00:04:45.892 "params": { 00:04:45.892 "discovery_filter": "match_any", 00:04:45.892 "admin_cmd_passthru": { 00:04:45.892 "identify_ctrlr": false 00:04:45.892 } 00:04:45.892 } 00:04:45.892 }, 00:04:45.892 { 00:04:45.892 "method": "nvmf_set_max_subsystems", 00:04:45.892 "params": { 00:04:45.892 "max_subsystems": 1024 00:04:45.892 } 00:04:45.892 }, 00:04:45.892 { 00:04:45.892 "method": "nvmf_set_crdt", 00:04:45.892 "params": { 00:04:45.892 "crdt1": 0, 00:04:45.892 "crdt2": 0, 00:04:45.892 "crdt3": 0 00:04:45.892 } 00:04:45.892 }, 00:04:45.892 { 00:04:45.892 "method": "nvmf_create_transport", 00:04:45.892 "params": { 00:04:45.892 "trtype": "TCP", 00:04:45.892 "max_queue_depth": 128, 00:04:45.892 "max_io_qpairs_per_ctrlr": 127, 00:04:45.892 "in_capsule_data_size": 4096, 00:04:45.892 "max_io_size": 131072, 00:04:45.892 "io_unit_size": 131072, 00:04:45.892 "max_aq_depth": 128, 00:04:45.892 "num_shared_buffers": 511, 00:04:45.892 "buf_cache_size": 4294967295, 00:04:45.892 "dif_insert_or_strip": false, 00:04:45.892 "zcopy": false, 00:04:45.892 "c2h_success": true, 00:04:45.892 "sock_priority": 0, 00:04:45.892 "abort_timeout_sec": 1, 00:04:45.892 "ack_timeout": 0, 00:04:45.892 "data_wr_pool_size": 0 00:04:45.892 } 00:04:45.892 } 00:04:45.892 ] 00:04:45.892 }, 00:04:45.892 { 00:04:45.892 "subsystem": "iscsi", 00:04:45.892 "config": [ 00:04:45.892 { 00:04:45.892 "method": "iscsi_set_options", 00:04:45.892 "params": { 00:04:45.892 "node_base": "iqn.2016-06.io.spdk", 00:04:45.892 "max_sessions": 128, 00:04:45.892 "max_connections_per_session": 2, 00:04:45.892 "max_queue_depth": 64, 00:04:45.892 "default_time2wait": 2, 00:04:45.892 "default_time2retain": 20, 00:04:45.892 "first_burst_length": 8192, 00:04:45.892 "immediate_data": true, 00:04:45.892 "allow_duplicated_isid": false, 00:04:45.892 
"error_recovery_level": 0, 00:04:45.892 "nop_timeout": 60, 00:04:45.892 "nop_in_interval": 30, 00:04:45.892 "disable_chap": false, 00:04:45.892 "require_chap": false, 00:04:45.892 "mutual_chap": false, 00:04:45.892 "chap_group": 0, 00:04:45.892 "max_large_datain_per_connection": 64, 00:04:45.892 "max_r2t_per_connection": 4, 00:04:45.892 "pdu_pool_size": 36864, 00:04:45.892 "immediate_data_pool_size": 16384, 00:04:45.892 "data_out_pool_size": 2048 00:04:45.892 } 00:04:45.892 } 00:04:45.892 ] 00:04:45.892 } 00:04:45.892 ] 00:04:45.892 } 00:04:45.892 01:10:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:45.892 01:10:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3204229 00:04:45.893 01:10:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3204229 ']' 00:04:45.893 01:10:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3204229 00:04:45.893 01:10:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:45.893 01:10:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:45.893 01:10:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3204229 00:04:45.893 01:10:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:45.893 01:10:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:45.893 01:10:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3204229' 00:04:45.893 killing process with pid 3204229 00:04:45.893 01:10:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3204229 00:04:45.893 01:10:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 3204229 00:04:46.149 01:10:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:46.149 01:10:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3204475 00:04:46.149 01:10:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:51.401 01:10:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3204475 00:04:51.401 01:10:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3204475 ']' 00:04:51.401 01:10:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3204475 00:04:51.401 01:10:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:51.401 01:10:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:51.401 01:10:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3204475 00:04:51.401 01:10:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:51.401 01:10:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:51.401 01:10:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3204475' 00:04:51.401 killing process with pid 3204475 00:04:51.401 01:10:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3204475 00:04:51.401 01:10:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 3204475 
00:04:51.658 01:10:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:51.659 01:10:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:51.659 00:04:51.659 real 0m6.717s 00:04:51.659 user 0m6.581s 00:04:51.659 sys 0m0.542s 00:04:51.659 01:10:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.659 01:10:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:51.659 ************************************ 00:04:51.659 END TEST skip_rpc_with_json 00:04:51.659 ************************************ 00:04:51.659 01:10:17 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:51.659 01:10:17 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:51.659 01:10:17 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.659 01:10:17 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.659 01:10:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.659 ************************************ 00:04:51.659 START TEST skip_rpc_with_delay 00:04:51.659 ************************************ 00:04:51.659 01:10:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:51.659 01:10:17 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:51.659 01:10:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:51.659 01:10:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:51.659 01:10:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:51.659 01:10:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:51.659 01:10:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:51.659 01:10:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:51.659 01:10:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:51.659 01:10:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:51.659 01:10:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:51.659 01:10:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:51.659 01:10:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:51.659 [2024-07-16 01:10:17.534859] app.c: 837:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:51.659 [2024-07-16 01:10:17.534917] app.c: 716:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:51.659 01:10:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:51.659 01:10:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:51.659 01:10:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:51.659 01:10:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:51.659 00:04:51.659 real 0m0.064s 00:04:51.659 user 0m0.044s 00:04:51.659 sys 0m0.020s 00:04:51.659 01:10:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.659 01:10:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:51.659 ************************************ 00:04:51.659 END TEST skip_rpc_with_delay 00:04:51.659 ************************************ 00:04:51.659 01:10:17 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:51.659 01:10:17 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:51.659 01:10:17 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:51.659 01:10:17 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:51.659 01:10:17 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.659 01:10:17 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.659 01:10:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.659 ************************************ 00:04:51.659 START TEST exit_on_failed_rpc_init 00:04:51.659 ************************************ 00:04:51.659 01:10:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:51.659 01:10:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3205442 00:04:51.659 01:10:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3205442 00:04:51.659 01:10:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:51.659 01:10:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 3205442 ']' 00:04:51.659 01:10:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.659 01:10:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:51.659 01:10:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.659 01:10:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:51.659 01:10:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:51.916 [2024-07-16 01:10:17.656014] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
00:04:51.916 [2024-07-16 01:10:17.656056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3205442 ] 00:04:51.916 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.916 [2024-07-16 01:10:17.709428] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.916 [2024-07-16 01:10:17.787958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.480 01:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:52.480 01:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:52.480 01:10:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.480 01:10:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:52.480 01:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:52.480 01:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:52.480 01:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.480 01:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:52.480 01:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.480 01:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:52.480 01:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.480 01:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:52.480 01:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.480 01:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:52.480 01:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:52.736 [2024-07-16 01:10:18.474202] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
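The second spdk_tgt launched next (core mask 0x2) is expected to fail, because the first instance still owns the default RPC socket. A rough sketch of the collision that exit_on_failed_rpc_init provokes, under the same $SPDK_DIR assumption (the real test uses the waitforlisten helper rather than a fixed sleep):

    "$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 &   # first target binds /var/tmp/spdk.sock
    sleep 2                                   # crude wait-for-listen; placeholder only
    "$SPDK_DIR/build/bin/spdk_tgt" -m 0x2     # should exit non-zero: RPC socket already in use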
00:04:52.736 [2024-07-16 01:10:18.474249] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3205470 ] 00:04:52.736 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.736 [2024-07-16 01:10:18.522070] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.736 [2024-07-16 01:10:18.594595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.736 [2024-07-16 01:10:18.594658] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:52.736 [2024-07-16 01:10:18.594666] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:52.736 [2024-07-16 01:10:18.594672] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:52.736 01:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:52.736 01:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:52.736 01:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:52.736 01:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:52.736 01:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:52.736 01:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:52.736 01:10:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:52.736 01:10:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3205442 00:04:52.736 01:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 3205442 ']' 00:04:52.736 01:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 3205442 00:04:52.736 01:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:52.736 01:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:52.736 01:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3205442 00:04:52.736 01:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:52.736 01:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:52.736 01:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3205442' 00:04:52.736 killing process with pid 3205442 00:04:52.736 01:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 3205442 00:04:52.736 01:10:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 3205442 00:04:53.301 00:04:53.301 real 0m1.409s 00:04:53.301 user 0m1.616s 00:04:53.301 sys 0m0.368s 00:04:53.301 01:10:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.301 01:10:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:53.301 ************************************ 00:04:53.301 END TEST exit_on_failed_rpc_init 00:04:53.301 ************************************ 00:04:53.301 01:10:19 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:53.301 01:10:19 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:53.301 00:04:53.301 real 0m13.888s 00:04:53.301 user 0m13.519s 00:04:53.301 sys 0m1.401s 00:04:53.301 01:10:19 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.301 01:10:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.301 ************************************ 00:04:53.301 END TEST skip_rpc 00:04:53.301 ************************************ 00:04:53.301 01:10:19 -- common/autotest_common.sh@1142 -- # return 0 00:04:53.301 01:10:19 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:53.301 01:10:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:53.301 01:10:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.301 01:10:19 -- common/autotest_common.sh@10 -- # set +x 00:04:53.301 ************************************ 00:04:53.301 START TEST rpc_client 00:04:53.301 ************************************ 00:04:53.301 01:10:19 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:53.301 * Looking for test storage... 00:04:53.301 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:53.301 01:10:19 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:53.301 OK 00:04:53.301 01:10:19 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:53.301 00:04:53.301 real 0m0.106s 00:04:53.301 user 0m0.047s 00:04:53.301 sys 0m0.065s 00:04:53.301 01:10:19 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.301 01:10:19 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:53.301 ************************************ 00:04:53.301 END TEST rpc_client 00:04:53.301 ************************************ 00:04:53.301 01:10:19 -- common/autotest_common.sh@1142 -- # return 0 00:04:53.301 01:10:19 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:53.301 01:10:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:53.301 01:10:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.301 01:10:19 -- common/autotest_common.sh@10 -- # set +x 00:04:53.301 ************************************ 00:04:53.301 START TEST json_config 00:04:53.301 ************************************ 00:04:53.301 01:10:19 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:53.559 01:10:19 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:53.559 01:10:19 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:53.559 01:10:19 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:53.559 01:10:19 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:53.559 01:10:19 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:53.559 01:10:19 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:53.559 01:10:19 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:53.559 01:10:19 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:53.559 01:10:19 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:53.559 
01:10:19 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:53.559 01:10:19 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:53.559 01:10:19 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:53.559 01:10:19 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:04:53.559 01:10:19 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:04:53.559 01:10:19 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:53.559 01:10:19 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:53.559 01:10:19 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:53.559 01:10:19 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:53.559 01:10:19 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:53.559 01:10:19 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:53.559 01:10:19 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:53.559 01:10:19 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:53.559 01:10:19 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.559 01:10:19 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.559 01:10:19 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.559 01:10:19 json_config -- paths/export.sh@5 -- # export PATH 00:04:53.559 01:10:19 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.559 01:10:19 json_config -- nvmf/common.sh@47 -- # : 0 00:04:53.559 01:10:19 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:53.559 01:10:19 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:53.559 01:10:19 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:53.559 01:10:19 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:53.559 01:10:19 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:53.559 01:10:19 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:53.559 01:10:19 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:53.559 01:10:19 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:53.559 01:10:19 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:53.559 01:10:19 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:53.559 01:10:19 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:53.559 01:10:19 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:53.559 01:10:19 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:53.559 01:10:19 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:53.559 01:10:19 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:53.559 01:10:19 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:53.559 01:10:19 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:53.559 01:10:19 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:53.559 01:10:19 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:53.559 01:10:19 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:53.559 01:10:19 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:53.559 01:10:19 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:53.559 01:10:19 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:53.559 01:10:19 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:53.559 INFO: JSON configuration test init 00:04:53.559 01:10:19 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:53.559 01:10:19 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:53.559 01:10:19 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:53.559 01:10:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.559 01:10:19 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:53.559 01:10:19 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:53.559 01:10:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.559 01:10:19 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:53.559 01:10:19 json_config -- json_config/common.sh@9 -- # local app=target 00:04:53.559 01:10:19 json_config -- json_config/common.sh@10 -- # shift 00:04:53.559 01:10:19 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:53.559 01:10:19 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:53.559 01:10:19 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:53.559 01:10:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:53.559 01:10:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:53.559 01:10:19 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3205802 00:04:53.560 01:10:19 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:53.560 Waiting for target to run... 00:04:53.560 01:10:19 json_config -- json_config/common.sh@25 -- # waitforlisten 3205802 /var/tmp/spdk_tgt.sock 00:04:53.560 01:10:19 json_config -- common/autotest_common.sh@829 -- # '[' -z 3205802 ']' 00:04:53.560 01:10:19 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:53.560 01:10:19 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:53.560 01:10:19 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:53.560 01:10:19 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:53.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:53.560 01:10:19 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:53.560 01:10:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.560 [2024-07-16 01:10:19.381276] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:04:53.560 [2024-07-16 01:10:19.381323] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3205802 ] 00:04:53.560 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.817 [2024-07-16 01:10:19.642906] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.817 [2024-07-16 01:10:19.707621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.383 01:10:20 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:54.383 01:10:20 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:54.383 01:10:20 json_config -- json_config/common.sh@26 -- # echo '' 00:04:54.383 00:04:54.383 01:10:20 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:54.383 01:10:20 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:54.383 01:10:20 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:54.383 01:10:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:54.383 01:10:20 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:54.383 01:10:20 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:54.383 01:10:20 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:54.383 01:10:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:54.383 01:10:20 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:54.383 01:10:20 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:54.383 01:10:20 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:57.727 01:10:23 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:57.727 01:10:23 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:57.727 01:10:23 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:57.727 01:10:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.727 01:10:23 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:57.727 01:10:23 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:57.727 01:10:23 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:57.727 01:10:23 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:57.727 01:10:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:57.727 01:10:23 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:57.727 01:10:23 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:57.727 01:10:23 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:57.727 01:10:23 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:57.727 01:10:23 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:57.727 01:10:23 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:57.728 01:10:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.728 01:10:23 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:57.728 01:10:23 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:57.728 01:10:23 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:57.728 01:10:23 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:57.728 01:10:23 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:57.728 01:10:23 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:57.728 01:10:23 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:57.728 01:10:23 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:57.728 01:10:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.728 01:10:23 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:57.728 01:10:23 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:57.728 01:10:23 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:57.728 01:10:23 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:57.728 01:10:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:57.728 MallocForNvmf0 00:04:57.728 01:10:23 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:57.728 01:10:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:57.983 MallocForNvmf1 00:04:57.983 01:10:23 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:57.983 01:10:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:57.983 [2024-07-16 01:10:23.967726] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:58.240 01:10:23 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:58.241 01:10:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:58.241 01:10:24 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:58.241 01:10:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:58.498 01:10:24 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:58.498 01:10:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:58.756 01:10:24 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:58.756 01:10:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:58.756 [2024-07-16 01:10:24.649817] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:58.756 01:10:24 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:58.756 01:10:24 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:58.756 01:10:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.756 01:10:24 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:58.756 01:10:24 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:58.756 01:10:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.756 01:10:24 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:58.756 01:10:24 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:58.756 01:10:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:59.013 MallocBdevForConfigChangeCheck 00:04:59.013 01:10:24 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:59.013 01:10:24 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:59.013 01:10:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.013 01:10:24 json_config -- 
json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:59.013 01:10:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:59.271 01:10:25 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:04:59.271 INFO: shutting down applications... 00:04:59.271 01:10:25 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:59.271 01:10:25 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:59.271 01:10:25 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:59.271 01:10:25 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:01.801 Calling clear_iscsi_subsystem 00:05:01.801 Calling clear_nvmf_subsystem 00:05:01.801 Calling clear_nbd_subsystem 00:05:01.801 Calling clear_ublk_subsystem 00:05:01.801 Calling clear_vhost_blk_subsystem 00:05:01.801 Calling clear_vhost_scsi_subsystem 00:05:01.801 Calling clear_bdev_subsystem 00:05:01.801 01:10:27 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:01.801 01:10:27 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:01.801 01:10:27 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:01.801 01:10:27 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:01.801 01:10:27 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:01.801 01:10:27 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:01.801 01:10:27 json_config -- json_config/json_config.sh@345 -- # break 00:05:01.801 01:10:27 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:01.801 01:10:27 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:01.801 01:10:27 json_config -- json_config/common.sh@31 -- # local app=target 00:05:01.801 01:10:27 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:01.801 01:10:27 json_config -- json_config/common.sh@35 -- # [[ -n 3205802 ]] 00:05:01.801 01:10:27 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3205802 00:05:01.801 01:10:27 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:01.801 01:10:27 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:01.801 01:10:27 json_config -- json_config/common.sh@41 -- # kill -0 3205802 00:05:01.801 01:10:27 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:02.368 01:10:28 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:02.368 01:10:28 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:02.368 01:10:28 json_config -- json_config/common.sh@41 -- # kill -0 3205802 00:05:02.368 01:10:28 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:02.368 01:10:28 json_config -- json_config/common.sh@43 -- # break 00:05:02.368 01:10:28 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:02.368 01:10:28 json_config -- json_config/common.sh@53 -- # echo 'SPDK target 
shutdown done' 00:05:02.368 SPDK target shutdown done 00:05:02.368 01:10:28 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:02.368 INFO: relaunching applications... 00:05:02.368 01:10:28 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:02.368 01:10:28 json_config -- json_config/common.sh@9 -- # local app=target 00:05:02.368 01:10:28 json_config -- json_config/common.sh@10 -- # shift 00:05:02.368 01:10:28 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:02.368 01:10:28 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:02.368 01:10:28 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:02.368 01:10:28 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:02.368 01:10:28 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:02.368 01:10:28 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3207435 00:05:02.368 01:10:28 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:02.368 Waiting for target to run... 00:05:02.368 01:10:28 json_config -- json_config/common.sh@25 -- # waitforlisten 3207435 /var/tmp/spdk_tgt.sock 00:05:02.368 01:10:28 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:02.368 01:10:28 json_config -- common/autotest_common.sh@829 -- # '[' -z 3207435 ']' 00:05:02.368 01:10:28 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:02.368 01:10:28 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:02.368 01:10:28 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:02.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:02.368 01:10:28 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:02.368 01:10:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.368 [2024-07-16 01:10:28.211397] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
00:05:02.368 [2024-07-16 01:10:28.211451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3207435 ] 00:05:02.368 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.935 [2024-07-16 01:10:28.643791] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.935 [2024-07-16 01:10:28.730946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.221 [2024-07-16 01:10:31.742137] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:06.221 [2024-07-16 01:10:31.774441] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:06.479 01:10:32 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:06.479 01:10:32 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:06.479 01:10:32 json_config -- json_config/common.sh@26 -- # echo '' 00:05:06.479 00:05:06.479 01:10:32 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:06.479 01:10:32 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:06.479 INFO: Checking if target configuration is the same... 00:05:06.479 01:10:32 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:06.479 01:10:32 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:06.479 01:10:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:06.479 + '[' 2 -ne 2 ']' 00:05:06.479 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:06.479 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:06.479 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:06.479 +++ basename /dev/fd/62 00:05:06.479 ++ mktemp /tmp/62.XXX 00:05:06.479 + tmp_file_1=/tmp/62.VhY 00:05:06.479 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:06.479 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:06.479 + tmp_file_2=/tmp/spdk_tgt_config.json.rxU 00:05:06.479 + ret=0 00:05:06.479 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:06.738 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:06.738 + diff -u /tmp/62.VhY /tmp/spdk_tgt_config.json.rxU 00:05:06.738 + echo 'INFO: JSON config files are the same' 00:05:06.738 INFO: JSON config files are the same 00:05:06.738 + rm /tmp/62.VhY /tmp/spdk_tgt_config.json.rxU 00:05:06.738 + exit 0 00:05:06.738 01:10:32 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:06.738 01:10:32 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:06.738 INFO: changing configuration and checking if this can be detected... 
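The same-configuration check above reduces to sorting two JSON dumps and diffing them: one taken live over RPC, one read back from spdk_tgt_config.json. A sketch equivalent to the json_diff.sh steps visible in the trace, with the same $SPDK_DIR assumption:

    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config \
      | "$SPDK_DIR/test/json_config/config_filter.py" -method sort > /tmp/live.sorted
    "$SPDK_DIR/test/json_config/config_filter.py" -method sort \
      < "$SPDK_DIR/spdk_tgt_config.json" > /tmp/disk.sorted
    # diff exits 0 while the live target still matches the file on disk.
    diff -u /tmp/disk.sorted /tmp/live.sorted && echo 'INFO: JSON config files are the same'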
00:05:06.738 01:10:32 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:06.738 01:10:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:06.996 01:10:32 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:06.996 01:10:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:06.996 01:10:32 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:06.996 + '[' 2 -ne 2 ']' 00:05:06.996 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:06.996 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:06.996 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:06.996 +++ basename /dev/fd/62 00:05:06.996 ++ mktemp /tmp/62.XXX 00:05:06.996 + tmp_file_1=/tmp/62.5WW 00:05:06.996 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:06.996 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:06.996 + tmp_file_2=/tmp/spdk_tgt_config.json.w6L 00:05:06.996 + ret=0 00:05:06.996 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:07.254 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:07.254 + diff -u /tmp/62.5WW /tmp/spdk_tgt_config.json.w6L 00:05:07.254 + ret=1 00:05:07.254 + echo '=== Start of file: /tmp/62.5WW ===' 00:05:07.254 + cat /tmp/62.5WW 00:05:07.254 + echo '=== End of file: /tmp/62.5WW ===' 00:05:07.254 + echo '' 00:05:07.254 + echo '=== Start of file: /tmp/spdk_tgt_config.json.w6L ===' 00:05:07.254 + cat /tmp/spdk_tgt_config.json.w6L 00:05:07.254 + echo '=== End of file: /tmp/spdk_tgt_config.json.w6L ===' 00:05:07.254 + echo '' 00:05:07.254 + rm /tmp/62.5WW /tmp/spdk_tgt_config.json.w6L 00:05:07.254 + exit 1 00:05:07.254 01:10:33 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:07.254 INFO: configuration change detected. 
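The change that the second diff catches is the deletion of the sentinel bdev created earlier for exactly this purpose. Sketch of the trigger, same assumptions as above:

    # Remove the sentinel bdev; the live config now diverges from spdk_tgt_config.json.
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
    # Re-running the sorted diff now exits 1, reported as 'configuration change detected'.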
00:05:07.254 01:10:33 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:07.254 01:10:33 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:07.254 01:10:33 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:07.254 01:10:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.254 01:10:33 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:07.254 01:10:33 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:07.254 01:10:33 json_config -- json_config/json_config.sh@317 -- # [[ -n 3207435 ]] 00:05:07.255 01:10:33 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:07.255 01:10:33 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:07.255 01:10:33 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:07.255 01:10:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.255 01:10:33 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:07.255 01:10:33 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:07.255 01:10:33 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:07.255 01:10:33 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:07.255 01:10:33 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:07.255 01:10:33 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:07.255 01:10:33 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:07.255 01:10:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.513 01:10:33 json_config -- json_config/json_config.sh@323 -- # killprocess 3207435 00:05:07.513 01:10:33 json_config -- common/autotest_common.sh@948 -- # '[' -z 3207435 ']' 00:05:07.513 01:10:33 json_config -- common/autotest_common.sh@952 -- # kill -0 3207435 00:05:07.513 01:10:33 json_config -- common/autotest_common.sh@953 -- # uname 00:05:07.513 01:10:33 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:07.513 01:10:33 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3207435 00:05:07.513 01:10:33 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:07.513 01:10:33 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:07.513 01:10:33 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3207435' 00:05:07.513 killing process with pid 3207435 00:05:07.513 01:10:33 json_config -- common/autotest_common.sh@967 -- # kill 3207435 00:05:07.513 01:10:33 json_config -- common/autotest_common.sh@972 -- # wait 3207435 00:05:09.416 01:10:35 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:09.416 01:10:35 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:09.416 01:10:35 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:09.416 01:10:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.675 01:10:35 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:09.675 01:10:35 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:09.675 INFO: Success 00:05:09.675 00:05:09.675 real 0m16.164s 
00:05:09.676 user 0m16.878s 00:05:09.676 sys 0m1.801s 00:05:09.676 01:10:35 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.676 01:10:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.676 ************************************ 00:05:09.676 END TEST json_config 00:05:09.676 ************************************ 00:05:09.676 01:10:35 -- common/autotest_common.sh@1142 -- # return 0 00:05:09.676 01:10:35 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:09.676 01:10:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.676 01:10:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.676 01:10:35 -- common/autotest_common.sh@10 -- # set +x 00:05:09.676 ************************************ 00:05:09.676 START TEST json_config_extra_key 00:05:09.676 ************************************ 00:05:09.676 01:10:35 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:09.676 01:10:35 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:09.676 01:10:35 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:09.676 01:10:35 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:09.676 01:10:35 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:09.676 01:10:35 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:09.676 01:10:35 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:09.676 01:10:35 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:09.676 01:10:35 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:09.676 01:10:35 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:09.676 01:10:35 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:09.676 01:10:35 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:09.676 01:10:35 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:09.676 01:10:35 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:09.676 01:10:35 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:09.676 01:10:35 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:09.676 01:10:35 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:09.676 01:10:35 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:09.676 01:10:35 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:09.676 01:10:35 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:09.676 01:10:35 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:09.676 01:10:35 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:09.676 01:10:35 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:09.676 01:10:35 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.676 01:10:35 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.676 01:10:35 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.676 01:10:35 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:09.676 01:10:35 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.676 01:10:35 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:09.676 01:10:35 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:09.676 01:10:35 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:09.676 01:10:35 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:09.676 01:10:35 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:09.676 01:10:35 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:09.676 01:10:35 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:09.676 01:10:35 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:09.676 01:10:35 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:09.676 01:10:35 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:09.676 01:10:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:09.676 01:10:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:09.676 01:10:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:09.676 01:10:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:09.676 01:10:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:09.676 01:10:35 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:09.676 01:10:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:09.676 01:10:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:09.676 01:10:35 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:09.676 01:10:35 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:09.676 INFO: launching applications... 00:05:09.676 01:10:35 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:09.676 01:10:35 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:09.676 01:10:35 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:09.676 01:10:35 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:09.676 01:10:35 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:09.676 01:10:35 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:09.676 01:10:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:09.676 01:10:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:09.676 01:10:35 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3208799 00:05:09.676 01:10:35 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:09.676 Waiting for target to run... 00:05:09.676 01:10:35 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3208799 /var/tmp/spdk_tgt.sock 00:05:09.676 01:10:35 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 3208799 ']' 00:05:09.676 01:10:35 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:09.676 01:10:35 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:09.676 01:10:35 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:09.676 01:10:35 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:09.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:09.676 01:10:35 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:09.676 01:10:35 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:09.676 [2024-07-16 01:10:35.634852] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
00:05:09.676 [2024-07-16 01:10:35.634903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3208799 ] 00:05:09.676 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.935 [2024-07-16 01:10:35.904313] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.194 [2024-07-16 01:10:35.972777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.453 01:10:36 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:10.453 01:10:36 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:10.453 01:10:36 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:10.453 00:05:10.453 01:10:36 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:10.453 INFO: shutting down applications... 00:05:10.453 01:10:36 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:10.453 01:10:36 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:10.453 01:10:36 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:10.453 01:10:36 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3208799 ]] 00:05:10.453 01:10:36 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3208799 00:05:10.453 01:10:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:10.453 01:10:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:10.453 01:10:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3208799 00:05:10.453 01:10:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:11.020 01:10:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:11.020 01:10:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:11.020 01:10:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3208799 00:05:11.020 01:10:36 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:11.020 01:10:36 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:11.020 01:10:36 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:11.020 01:10:36 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:11.020 SPDK target shutdown done 00:05:11.020 01:10:36 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:11.020 Success 00:05:11.020 00:05:11.020 real 0m1.431s 00:05:11.020 user 0m1.194s 00:05:11.020 sys 0m0.368s 00:05:11.020 01:10:36 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.020 01:10:36 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:11.020 ************************************ 00:05:11.020 END TEST json_config_extra_key 00:05:11.020 ************************************ 00:05:11.020 01:10:36 -- common/autotest_common.sh@1142 -- # return 0 00:05:11.020 01:10:36 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:11.020 01:10:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.020 01:10:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.020 01:10:36 -- 
common/autotest_common.sh@10 -- # set +x 00:05:11.020 ************************************ 00:05:11.020 START TEST alias_rpc 00:05:11.020 ************************************ 00:05:11.020 01:10:36 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:11.278 * Looking for test storage... 00:05:11.278 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:11.278 01:10:37 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:11.278 01:10:37 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3209081 00:05:11.278 01:10:37 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:11.278 01:10:37 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3209081 00:05:11.278 01:10:37 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 3209081 ']' 00:05:11.278 01:10:37 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.278 01:10:37 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:11.278 01:10:37 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.278 01:10:37 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:11.278 01:10:37 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.278 [2024-07-16 01:10:37.111554] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:05:11.278 [2024-07-16 01:10:37.111601] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3209081 ] 00:05:11.278 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.278 [2024-07-16 01:10:37.165318] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.278 [2024-07-16 01:10:37.237837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.212 01:10:37 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:12.212 01:10:37 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:12.212 01:10:37 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:12.212 01:10:38 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3209081 00:05:12.213 01:10:38 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 3209081 ']' 00:05:12.213 01:10:38 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 3209081 00:05:12.213 01:10:38 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:12.213 01:10:38 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:12.213 01:10:38 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3209081 00:05:12.213 01:10:38 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:12.213 01:10:38 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:12.213 01:10:38 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3209081' 00:05:12.213 killing process with pid 3209081 00:05:12.213 01:10:38 alias_rpc -- 
common/autotest_common.sh@967 -- # kill 3209081 00:05:12.213 01:10:38 alias_rpc -- common/autotest_common.sh@972 -- # wait 3209081 00:05:12.470 00:05:12.470 real 0m1.468s 00:05:12.470 user 0m1.616s 00:05:12.470 sys 0m0.384s 00:05:12.470 01:10:38 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.470 01:10:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.470 ************************************ 00:05:12.470 END TEST alias_rpc 00:05:12.470 ************************************ 00:05:12.729 01:10:38 -- common/autotest_common.sh@1142 -- # return 0 00:05:12.729 01:10:38 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:12.729 01:10:38 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:12.729 01:10:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.729 01:10:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.729 01:10:38 -- common/autotest_common.sh@10 -- # set +x 00:05:12.729 ************************************ 00:05:12.729 START TEST spdkcli_tcp 00:05:12.729 ************************************ 00:05:12.729 01:10:38 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:12.729 * Looking for test storage... 00:05:12.729 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:12.729 01:10:38 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:12.729 01:10:38 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:12.729 01:10:38 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:12.729 01:10:38 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:12.729 01:10:38 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:12.729 01:10:38 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:12.729 01:10:38 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:12.729 01:10:38 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:12.729 01:10:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:12.729 01:10:38 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3209371 00:05:12.729 01:10:38 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3209371 00:05:12.729 01:10:38 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:12.729 01:10:38 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 3209371 ']' 00:05:12.729 01:10:38 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.729 01:10:38 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:12.729 01:10:38 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.729 01:10:38 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:12.729 01:10:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:12.729 [2024-07-16 01:10:38.656227] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
00:05:12.729 [2024-07-16 01:10:38.656268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3209371 ] 00:05:12.729 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.729 [2024-07-16 01:10:38.711082] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:12.987 [2024-07-16 01:10:38.791074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.987 [2024-07-16 01:10:38.791076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.554 01:10:39 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:13.554 01:10:39 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:13.554 01:10:39 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3209600 00:05:13.554 01:10:39 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:13.554 01:10:39 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:13.813 [ 00:05:13.813 "bdev_malloc_delete", 00:05:13.813 "bdev_malloc_create", 00:05:13.813 "bdev_null_resize", 00:05:13.813 "bdev_null_delete", 00:05:13.813 "bdev_null_create", 00:05:13.813 "bdev_nvme_cuse_unregister", 00:05:13.813 "bdev_nvme_cuse_register", 00:05:13.813 "bdev_opal_new_user", 00:05:13.813 "bdev_opal_set_lock_state", 00:05:13.813 "bdev_opal_delete", 00:05:13.813 "bdev_opal_get_info", 00:05:13.813 "bdev_opal_create", 00:05:13.813 "bdev_nvme_opal_revert", 00:05:13.813 "bdev_nvme_opal_init", 00:05:13.813 "bdev_nvme_send_cmd", 00:05:13.813 "bdev_nvme_get_path_iostat", 00:05:13.813 "bdev_nvme_get_mdns_discovery_info", 00:05:13.813 "bdev_nvme_stop_mdns_discovery", 00:05:13.813 "bdev_nvme_start_mdns_discovery", 00:05:13.813 "bdev_nvme_set_multipath_policy", 00:05:13.813 "bdev_nvme_set_preferred_path", 00:05:13.813 "bdev_nvme_get_io_paths", 00:05:13.813 "bdev_nvme_remove_error_injection", 00:05:13.813 "bdev_nvme_add_error_injection", 00:05:13.813 "bdev_nvme_get_discovery_info", 00:05:13.813 "bdev_nvme_stop_discovery", 00:05:13.813 "bdev_nvme_start_discovery", 00:05:13.813 "bdev_nvme_get_controller_health_info", 00:05:13.813 "bdev_nvme_disable_controller", 00:05:13.813 "bdev_nvme_enable_controller", 00:05:13.813 "bdev_nvme_reset_controller", 00:05:13.813 "bdev_nvme_get_transport_statistics", 00:05:13.813 "bdev_nvme_apply_firmware", 00:05:13.813 "bdev_nvme_detach_controller", 00:05:13.813 "bdev_nvme_get_controllers", 00:05:13.813 "bdev_nvme_attach_controller", 00:05:13.813 "bdev_nvme_set_hotplug", 00:05:13.813 "bdev_nvme_set_options", 00:05:13.813 "bdev_passthru_delete", 00:05:13.813 "bdev_passthru_create", 00:05:13.813 "bdev_lvol_set_parent_bdev", 00:05:13.813 "bdev_lvol_set_parent", 00:05:13.813 "bdev_lvol_check_shallow_copy", 00:05:13.813 "bdev_lvol_start_shallow_copy", 00:05:13.813 "bdev_lvol_grow_lvstore", 00:05:13.813 "bdev_lvol_get_lvols", 00:05:13.813 "bdev_lvol_get_lvstores", 00:05:13.813 "bdev_lvol_delete", 00:05:13.813 "bdev_lvol_set_read_only", 00:05:13.813 "bdev_lvol_resize", 00:05:13.813 "bdev_lvol_decouple_parent", 00:05:13.813 "bdev_lvol_inflate", 00:05:13.813 "bdev_lvol_rename", 00:05:13.813 "bdev_lvol_clone_bdev", 00:05:13.813 "bdev_lvol_clone", 00:05:13.813 "bdev_lvol_snapshot", 00:05:13.813 "bdev_lvol_create", 00:05:13.813 "bdev_lvol_delete_lvstore", 00:05:13.813 
"bdev_lvol_rename_lvstore", 00:05:13.813 "bdev_lvol_create_lvstore", 00:05:13.813 "bdev_raid_set_options", 00:05:13.813 "bdev_raid_remove_base_bdev", 00:05:13.813 "bdev_raid_add_base_bdev", 00:05:13.813 "bdev_raid_delete", 00:05:13.813 "bdev_raid_create", 00:05:13.813 "bdev_raid_get_bdevs", 00:05:13.813 "bdev_error_inject_error", 00:05:13.813 "bdev_error_delete", 00:05:13.813 "bdev_error_create", 00:05:13.813 "bdev_split_delete", 00:05:13.813 "bdev_split_create", 00:05:13.813 "bdev_delay_delete", 00:05:13.813 "bdev_delay_create", 00:05:13.813 "bdev_delay_update_latency", 00:05:13.813 "bdev_zone_block_delete", 00:05:13.813 "bdev_zone_block_create", 00:05:13.813 "blobfs_create", 00:05:13.813 "blobfs_detect", 00:05:13.813 "blobfs_set_cache_size", 00:05:13.813 "bdev_aio_delete", 00:05:13.813 "bdev_aio_rescan", 00:05:13.813 "bdev_aio_create", 00:05:13.813 "bdev_ftl_set_property", 00:05:13.813 "bdev_ftl_get_properties", 00:05:13.813 "bdev_ftl_get_stats", 00:05:13.813 "bdev_ftl_unmap", 00:05:13.813 "bdev_ftl_unload", 00:05:13.813 "bdev_ftl_delete", 00:05:13.813 "bdev_ftl_load", 00:05:13.813 "bdev_ftl_create", 00:05:13.813 "bdev_virtio_attach_controller", 00:05:13.813 "bdev_virtio_scsi_get_devices", 00:05:13.813 "bdev_virtio_detach_controller", 00:05:13.813 "bdev_virtio_blk_set_hotplug", 00:05:13.813 "bdev_iscsi_delete", 00:05:13.813 "bdev_iscsi_create", 00:05:13.813 "bdev_iscsi_set_options", 00:05:13.813 "accel_error_inject_error", 00:05:13.813 "ioat_scan_accel_module", 00:05:13.813 "dsa_scan_accel_module", 00:05:13.813 "iaa_scan_accel_module", 00:05:13.813 "vfu_virtio_create_scsi_endpoint", 00:05:13.813 "vfu_virtio_scsi_remove_target", 00:05:13.813 "vfu_virtio_scsi_add_target", 00:05:13.813 "vfu_virtio_create_blk_endpoint", 00:05:13.813 "vfu_virtio_delete_endpoint", 00:05:13.813 "keyring_file_remove_key", 00:05:13.813 "keyring_file_add_key", 00:05:13.813 "keyring_linux_set_options", 00:05:13.813 "iscsi_get_histogram", 00:05:13.813 "iscsi_enable_histogram", 00:05:13.813 "iscsi_set_options", 00:05:13.813 "iscsi_get_auth_groups", 00:05:13.813 "iscsi_auth_group_remove_secret", 00:05:13.813 "iscsi_auth_group_add_secret", 00:05:13.813 "iscsi_delete_auth_group", 00:05:13.813 "iscsi_create_auth_group", 00:05:13.813 "iscsi_set_discovery_auth", 00:05:13.813 "iscsi_get_options", 00:05:13.813 "iscsi_target_node_request_logout", 00:05:13.813 "iscsi_target_node_set_redirect", 00:05:13.813 "iscsi_target_node_set_auth", 00:05:13.813 "iscsi_target_node_add_lun", 00:05:13.813 "iscsi_get_stats", 00:05:13.813 "iscsi_get_connections", 00:05:13.813 "iscsi_portal_group_set_auth", 00:05:13.813 "iscsi_start_portal_group", 00:05:13.813 "iscsi_delete_portal_group", 00:05:13.813 "iscsi_create_portal_group", 00:05:13.813 "iscsi_get_portal_groups", 00:05:13.813 "iscsi_delete_target_node", 00:05:13.813 "iscsi_target_node_remove_pg_ig_maps", 00:05:13.813 "iscsi_target_node_add_pg_ig_maps", 00:05:13.813 "iscsi_create_target_node", 00:05:13.813 "iscsi_get_target_nodes", 00:05:13.813 "iscsi_delete_initiator_group", 00:05:13.813 "iscsi_initiator_group_remove_initiators", 00:05:13.813 "iscsi_initiator_group_add_initiators", 00:05:13.813 "iscsi_create_initiator_group", 00:05:13.813 "iscsi_get_initiator_groups", 00:05:13.813 "nvmf_set_crdt", 00:05:13.813 "nvmf_set_config", 00:05:13.813 "nvmf_set_max_subsystems", 00:05:13.813 "nvmf_stop_mdns_prr", 00:05:13.813 "nvmf_publish_mdns_prr", 00:05:13.813 "nvmf_subsystem_get_listeners", 00:05:13.813 "nvmf_subsystem_get_qpairs", 00:05:13.813 "nvmf_subsystem_get_controllers", 00:05:13.813 
"nvmf_get_stats", 00:05:13.813 "nvmf_get_transports", 00:05:13.813 "nvmf_create_transport", 00:05:13.813 "nvmf_get_targets", 00:05:13.813 "nvmf_delete_target", 00:05:13.813 "nvmf_create_target", 00:05:13.813 "nvmf_subsystem_allow_any_host", 00:05:13.813 "nvmf_subsystem_remove_host", 00:05:13.813 "nvmf_subsystem_add_host", 00:05:13.813 "nvmf_ns_remove_host", 00:05:13.813 "nvmf_ns_add_host", 00:05:13.813 "nvmf_subsystem_remove_ns", 00:05:13.813 "nvmf_subsystem_add_ns", 00:05:13.813 "nvmf_subsystem_listener_set_ana_state", 00:05:13.813 "nvmf_discovery_get_referrals", 00:05:13.813 "nvmf_discovery_remove_referral", 00:05:13.813 "nvmf_discovery_add_referral", 00:05:13.813 "nvmf_subsystem_remove_listener", 00:05:13.813 "nvmf_subsystem_add_listener", 00:05:13.813 "nvmf_delete_subsystem", 00:05:13.813 "nvmf_create_subsystem", 00:05:13.813 "nvmf_get_subsystems", 00:05:13.813 "env_dpdk_get_mem_stats", 00:05:13.813 "nbd_get_disks", 00:05:13.813 "nbd_stop_disk", 00:05:13.813 "nbd_start_disk", 00:05:13.813 "ublk_recover_disk", 00:05:13.813 "ublk_get_disks", 00:05:13.813 "ublk_stop_disk", 00:05:13.813 "ublk_start_disk", 00:05:13.813 "ublk_destroy_target", 00:05:13.813 "ublk_create_target", 00:05:13.813 "virtio_blk_create_transport", 00:05:13.813 "virtio_blk_get_transports", 00:05:13.813 "vhost_controller_set_coalescing", 00:05:13.813 "vhost_get_controllers", 00:05:13.813 "vhost_delete_controller", 00:05:13.813 "vhost_create_blk_controller", 00:05:13.813 "vhost_scsi_controller_remove_target", 00:05:13.813 "vhost_scsi_controller_add_target", 00:05:13.813 "vhost_start_scsi_controller", 00:05:13.813 "vhost_create_scsi_controller", 00:05:13.813 "thread_set_cpumask", 00:05:13.813 "framework_get_governor", 00:05:13.813 "framework_get_scheduler", 00:05:13.813 "framework_set_scheduler", 00:05:13.813 "framework_get_reactors", 00:05:13.813 "thread_get_io_channels", 00:05:13.813 "thread_get_pollers", 00:05:13.813 "thread_get_stats", 00:05:13.813 "framework_monitor_context_switch", 00:05:13.813 "spdk_kill_instance", 00:05:13.813 "log_enable_timestamps", 00:05:13.813 "log_get_flags", 00:05:13.813 "log_clear_flag", 00:05:13.813 "log_set_flag", 00:05:13.813 "log_get_level", 00:05:13.813 "log_set_level", 00:05:13.813 "log_get_print_level", 00:05:13.813 "log_set_print_level", 00:05:13.813 "framework_enable_cpumask_locks", 00:05:13.813 "framework_disable_cpumask_locks", 00:05:13.813 "framework_wait_init", 00:05:13.813 "framework_start_init", 00:05:13.813 "scsi_get_devices", 00:05:13.813 "bdev_get_histogram", 00:05:13.813 "bdev_enable_histogram", 00:05:13.813 "bdev_set_qos_limit", 00:05:13.813 "bdev_set_qd_sampling_period", 00:05:13.813 "bdev_get_bdevs", 00:05:13.813 "bdev_reset_iostat", 00:05:13.813 "bdev_get_iostat", 00:05:13.813 "bdev_examine", 00:05:13.813 "bdev_wait_for_examine", 00:05:13.813 "bdev_set_options", 00:05:13.813 "notify_get_notifications", 00:05:13.813 "notify_get_types", 00:05:13.813 "accel_get_stats", 00:05:13.813 "accel_set_options", 00:05:13.813 "accel_set_driver", 00:05:13.813 "accel_crypto_key_destroy", 00:05:13.813 "accel_crypto_keys_get", 00:05:13.813 "accel_crypto_key_create", 00:05:13.813 "accel_assign_opc", 00:05:13.813 "accel_get_module_info", 00:05:13.813 "accel_get_opc_assignments", 00:05:13.813 "vmd_rescan", 00:05:13.813 "vmd_remove_device", 00:05:13.813 "vmd_enable", 00:05:13.813 "sock_get_default_impl", 00:05:13.813 "sock_set_default_impl", 00:05:13.813 "sock_impl_set_options", 00:05:13.813 "sock_impl_get_options", 00:05:13.813 "iobuf_get_stats", 00:05:13.814 "iobuf_set_options", 
00:05:13.814 "keyring_get_keys", 00:05:13.814 "framework_get_pci_devices", 00:05:13.814 "framework_get_config", 00:05:13.814 "framework_get_subsystems", 00:05:13.814 "vfu_tgt_set_base_path", 00:05:13.814 "trace_get_info", 00:05:13.814 "trace_get_tpoint_group_mask", 00:05:13.814 "trace_disable_tpoint_group", 00:05:13.814 "trace_enable_tpoint_group", 00:05:13.814 "trace_clear_tpoint_mask", 00:05:13.814 "trace_set_tpoint_mask", 00:05:13.814 "spdk_get_version", 00:05:13.814 "rpc_get_methods" 00:05:13.814 ] 00:05:13.814 01:10:39 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:13.814 01:10:39 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:13.814 01:10:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:13.814 01:10:39 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:13.814 01:10:39 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3209371 00:05:13.814 01:10:39 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 3209371 ']' 00:05:13.814 01:10:39 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 3209371 00:05:13.814 01:10:39 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:13.814 01:10:39 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:13.814 01:10:39 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3209371 00:05:13.814 01:10:39 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:13.814 01:10:39 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:13.814 01:10:39 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3209371' 00:05:13.814 killing process with pid 3209371 00:05:13.814 01:10:39 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 3209371 00:05:13.814 01:10:39 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 3209371 00:05:14.072 00:05:14.072 real 0m1.502s 00:05:14.072 user 0m2.794s 00:05:14.072 sys 0m0.428s 00:05:14.072 01:10:40 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.072 01:10:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:14.072 ************************************ 00:05:14.072 END TEST spdkcli_tcp 00:05:14.072 ************************************ 00:05:14.072 01:10:40 -- common/autotest_common.sh@1142 -- # return 0 00:05:14.072 01:10:40 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:14.072 01:10:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:14.072 01:10:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.072 01:10:40 -- common/autotest_common.sh@10 -- # set +x 00:05:14.329 ************************************ 00:05:14.329 START TEST dpdk_mem_utility 00:05:14.329 ************************************ 00:05:14.329 01:10:40 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:14.329 * Looking for test storage... 
00:05:14.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:14.329 01:10:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:14.329 01:10:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3209670 00:05:14.329 01:10:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3209670 00:05:14.329 01:10:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.329 01:10:40 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 3209670 ']' 00:05:14.329 01:10:40 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.329 01:10:40 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:14.329 01:10:40 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.329 01:10:40 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:14.329 01:10:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:14.329 [2024-07-16 01:10:40.197455] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:05:14.329 [2024-07-16 01:10:40.197500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3209670 ] 00:05:14.329 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.329 [2024-07-16 01:10:40.251767] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.586 [2024-07-16 01:10:40.325521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.150 01:10:40 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:15.150 01:10:40 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:15.150 01:10:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:15.150 01:10:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:15.150 01:10:40 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.150 01:10:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:15.150 { 00:05:15.150 "filename": "/tmp/spdk_mem_dump.txt" 00:05:15.150 } 00:05:15.150 01:10:40 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.150 01:10:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:15.150 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:15.150 1 heaps totaling size 814.000000 MiB 00:05:15.150 size: 814.000000 MiB heap id: 0 00:05:15.150 end heaps---------- 00:05:15.150 8 mempools totaling size 598.116089 MiB 00:05:15.150 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:15.150 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:15.150 size: 84.521057 MiB name: bdev_io_3209670 00:05:15.150 size: 51.011292 MiB name: evtpool_3209670 00:05:15.150 
size: 50.003479 MiB name: msgpool_3209670 00:05:15.150 size: 21.763794 MiB name: PDU_Pool 00:05:15.150 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:15.150 size: 0.026123 MiB name: Session_Pool 00:05:15.150 end mempools------- 00:05:15.150 6 memzones totaling size 4.142822 MiB 00:05:15.150 size: 1.000366 MiB name: RG_ring_0_3209670 00:05:15.150 size: 1.000366 MiB name: RG_ring_1_3209670 00:05:15.150 size: 1.000366 MiB name: RG_ring_4_3209670 00:05:15.150 size: 1.000366 MiB name: RG_ring_5_3209670 00:05:15.150 size: 0.125366 MiB name: RG_ring_2_3209670 00:05:15.150 size: 0.015991 MiB name: RG_ring_3_3209670 00:05:15.150 end memzones------- 00:05:15.150 01:10:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:15.150 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:15.150 list of free elements. size: 12.519348 MiB 00:05:15.150 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:15.150 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:15.150 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:15.150 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:15.150 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:15.150 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:15.150 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:15.150 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:15.150 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:15.150 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:15.150 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:15.150 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:15.150 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:15.150 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:15.150 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:15.150 list of standard malloc elements. 
size: 199.218079 MiB 00:05:15.150 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:15.150 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:15.150 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:15.150 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:15.150 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:15.150 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:15.150 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:15.150 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:15.150 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:15.150 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:15.150 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:15.150 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:15.150 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:15.150 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:15.150 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:15.150 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:15.150 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:15.150 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:15.150 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:15.150 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:15.150 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:15.150 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:15.150 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:15.150 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:15.150 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:15.150 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:15.150 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:15.150 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:15.150 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:15.150 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:15.150 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:15.150 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:15.150 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:15.150 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:15.150 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:15.151 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:15.151 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:15.151 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:15.151 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:15.151 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:15.151 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:15.151 list of memzone associated elements. 
size: 602.262573 MiB 00:05:15.151 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:15.151 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:15.151 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:15.151 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:15.151 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:15.151 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3209670_0 00:05:15.151 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:15.151 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3209670_0 00:05:15.151 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:15.151 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3209670_0 00:05:15.151 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:15.151 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:15.151 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:15.151 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:15.151 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:15.151 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3209670 00:05:15.151 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:15.151 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3209670 00:05:15.151 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:15.151 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3209670 00:05:15.151 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:15.151 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:15.151 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:15.151 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:15.151 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:15.151 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:15.151 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:15.151 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:15.151 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:15.151 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3209670 00:05:15.151 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:15.151 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3209670 00:05:15.151 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:15.151 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3209670 00:05:15.151 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:15.151 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3209670 00:05:15.151 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:15.151 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3209670 00:05:15.151 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:15.151 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:15.151 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:15.151 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:15.151 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:15.151 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:15.151 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:15.151 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_3209670 00:05:15.151 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:15.151 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:15.151 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:15.151 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:15.151 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:15.151 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3209670 00:05:15.151 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:15.151 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:15.151 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:15.151 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3209670 00:05:15.151 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:15.151 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3209670 00:05:15.151 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:15.151 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:15.151 01:10:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:15.151 01:10:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3209670 00:05:15.151 01:10:41 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 3209670 ']' 00:05:15.151 01:10:41 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 3209670 00:05:15.151 01:10:41 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:15.151 01:10:41 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:15.151 01:10:41 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3209670 00:05:15.409 01:10:41 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:15.409 01:10:41 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:15.409 01:10:41 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3209670' 00:05:15.409 killing process with pid 3209670 00:05:15.409 01:10:41 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 3209670 00:05:15.409 01:10:41 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 3209670 00:05:15.667 00:05:15.667 real 0m1.377s 00:05:15.667 user 0m1.477s 00:05:15.667 sys 0m0.359s 00:05:15.667 01:10:41 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.667 01:10:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:15.667 ************************************ 00:05:15.667 END TEST dpdk_mem_utility 00:05:15.667 ************************************ 00:05:15.667 01:10:41 -- common/autotest_common.sh@1142 -- # return 0 00:05:15.667 01:10:41 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:15.667 01:10:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.667 01:10:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.667 01:10:41 -- common/autotest_common.sh@10 -- # set +x 00:05:15.667 ************************************ 00:05:15.667 START TEST event 00:05:15.667 ************************************ 00:05:15.667 01:10:41 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:15.667 * Looking for test storage... 
00:05:15.667 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:15.667 01:10:41 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:15.667 01:10:41 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:15.667 01:10:41 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:15.667 01:10:41 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:15.667 01:10:41 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.667 01:10:41 event -- common/autotest_common.sh@10 -- # set +x 00:05:15.667 ************************************ 00:05:15.667 START TEST event_perf 00:05:15.667 ************************************ 00:05:15.667 01:10:41 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:15.667 Running I/O for 1 seconds...[2024-07-16 01:10:41.632478] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:05:15.667 [2024-07-16 01:10:41.632545] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3209964 ] 00:05:15.924 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.924 [2024-07-16 01:10:41.694124] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:15.924 [2024-07-16 01:10:41.768495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.924 [2024-07-16 01:10:41.768589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:15.924 [2024-07-16 01:10:41.768687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:15.924 [2024-07-16 01:10:41.768689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.859 Running I/O for 1 seconds... 00:05:16.859 lcore 0: 206485 00:05:16.859 lcore 1: 206484 00:05:16.859 lcore 2: 206485 00:05:16.859 lcore 3: 206487 00:05:16.859 done. 00:05:16.859 00:05:16.859 real 0m1.226s 00:05:16.859 user 0m4.140s 00:05:16.859 sys 0m0.085s 00:05:16.859 01:10:42 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.859 01:10:42 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:16.859 ************************************ 00:05:16.859 END TEST event_perf 00:05:16.859 ************************************ 00:05:17.118 01:10:42 event -- common/autotest_common.sh@1142 -- # return 0 00:05:17.118 01:10:42 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:17.118 01:10:42 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:17.118 01:10:42 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.118 01:10:42 event -- common/autotest_common.sh@10 -- # set +x 00:05:17.118 ************************************ 00:05:17.118 START TEST event_reactor 00:05:17.118 ************************************ 00:05:17.118 01:10:42 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:17.118 [2024-07-16 01:10:42.912632] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
00:05:17.118 [2024-07-16 01:10:42.912691] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3210216 ] 00:05:17.118 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.118 [2024-07-16 01:10:42.969509] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.118 [2024-07-16 01:10:43.045920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.494 test_start 00:05:18.494 oneshot 00:05:18.494 tick 100 00:05:18.494 tick 100 00:05:18.494 tick 250 00:05:18.494 tick 100 00:05:18.494 tick 100 00:05:18.494 tick 250 00:05:18.494 tick 100 00:05:18.494 tick 500 00:05:18.494 tick 100 00:05:18.494 tick 100 00:05:18.494 tick 250 00:05:18.494 tick 100 00:05:18.494 tick 100 00:05:18.494 test_end 00:05:18.494 00:05:18.494 real 0m1.216s 00:05:18.494 user 0m1.143s 00:05:18.494 sys 0m0.069s 00:05:18.494 01:10:44 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.494 01:10:44 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:18.494 ************************************ 00:05:18.494 END TEST event_reactor 00:05:18.494 ************************************ 00:05:18.494 01:10:44 event -- common/autotest_common.sh@1142 -- # return 0 00:05:18.494 01:10:44 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:18.494 01:10:44 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:18.494 01:10:44 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.494 01:10:44 event -- common/autotest_common.sh@10 -- # set +x 00:05:18.494 ************************************ 00:05:18.494 START TEST event_reactor_perf 00:05:18.494 ************************************ 00:05:18.494 01:10:44 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:18.494 [2024-07-16 01:10:44.194685] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
00:05:18.494 [2024-07-16 01:10:44.194756] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3210466 ] 00:05:18.494 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.494 [2024-07-16 01:10:44.253714] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.494 [2024-07-16 01:10:44.324569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.428 test_start 00:05:19.428 test_end 00:05:19.428 Performance: 516055 events per second 00:05:19.428 00:05:19.428 real 0m1.218s 00:05:19.428 user 0m1.141s 00:05:19.428 sys 0m0.073s 00:05:19.428 01:10:45 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.428 01:10:45 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:19.428 ************************************ 00:05:19.428 END TEST event_reactor_perf 00:05:19.428 ************************************ 00:05:19.687 01:10:45 event -- common/autotest_common.sh@1142 -- # return 0 00:05:19.687 01:10:45 event -- event/event.sh@49 -- # uname -s 00:05:19.687 01:10:45 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:19.687 01:10:45 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:19.687 01:10:45 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:19.687 01:10:45 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.687 01:10:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:19.687 ************************************ 00:05:19.687 START TEST event_scheduler 00:05:19.687 ************************************ 00:05:19.687 01:10:45 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:19.687 * Looking for test storage... 00:05:19.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:19.687 01:10:45 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:19.687 01:10:45 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3210744 00:05:19.687 01:10:45 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:19.687 01:10:45 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3210744 00:05:19.687 01:10:45 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 3210744 ']' 00:05:19.687 01:10:45 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:19.687 01:10:45 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.687 01:10:45 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:19.687 01:10:45 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:19.687 01:10:45 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:19.687 01:10:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:19.687 [2024-07-16 01:10:45.579001] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:05:19.687 [2024-07-16 01:10:45.579041] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3210744 ] 00:05:19.687 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.687 [2024-07-16 01:10:45.629220] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:19.945 [2024-07-16 01:10:45.708840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.945 [2024-07-16 01:10:45.708926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.945 [2024-07-16 01:10:45.709028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:19.945 [2024-07-16 01:10:45.709029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:20.511 01:10:46 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:20.511 01:10:46 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:20.511 01:10:46 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:20.511 01:10:46 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.511 01:10:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:20.511 [2024-07-16 01:10:46.407431] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:20.511 [2024-07-16 01:10:46.407447] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:20.511 [2024-07-16 01:10:46.407455] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:20.511 [2024-07-16 01:10:46.407463] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:20.511 [2024-07-16 01:10:46.407468] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:20.511 01:10:46 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.511 01:10:46 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:20.511 01:10:46 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.511 01:10:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:20.511 [2024-07-16 01:10:46.477385] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:20.511 01:10:46 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.511 01:10:46 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:20.511 01:10:46 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.511 01:10:46 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.511 01:10:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:20.769 ************************************ 00:05:20.769 START TEST scheduler_create_thread 00:05:20.769 ************************************ 00:05:20.769 01:10:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:20.769 01:10:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:20.769 01:10:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.769 01:10:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.769 2 00:05:20.769 01:10:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.769 01:10:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:20.769 01:10:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.769 01:10:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.769 3 00:05:20.770 01:10:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.770 01:10:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:20.770 01:10:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.770 01:10:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.770 4 00:05:20.770 01:10:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.770 01:10:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:20.770 01:10:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.770 01:10:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.770 5 00:05:20.770 01:10:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.770 01:10:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:20.770 01:10:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.770 01:10:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.770 6 00:05:20.770 01:10:46 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.770 01:10:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:20.770 01:10:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.770 01:10:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.770 7 00:05:20.770 01:10:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.770 01:10:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:20.770 01:10:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.770 01:10:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.770 8 00:05:20.770 01:10:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.770 01:10:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:20.770 01:10:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.770 01:10:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.770 9 00:05:20.770 01:10:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.770 01:10:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:20.770 01:10:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.770 01:10:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.770 10 00:05:20.770 01:10:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.770 01:10:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:20.770 01:10:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.770 01:10:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.770 01:10:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.770 01:10:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:20.770 01:10:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:20.770 01:10:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.770 01:10:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.770 01:10:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.770 01:10:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:20.770 01:10:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.770 01:10:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.142 01:10:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.142 01:10:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:22.142 01:10:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:22.142 01:10:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.142 01:10:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.517 01:10:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.517 00:05:23.517 real 0m2.621s 00:05:23.517 user 0m0.021s 00:05:23.517 sys 0m0.006s 00:05:23.517 01:10:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.517 01:10:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.517 ************************************ 00:05:23.517 END TEST scheduler_create_thread 00:05:23.517 ************************************ 00:05:23.517 01:10:49 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:23.517 01:10:49 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:23.517 01:10:49 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3210744 00:05:23.517 01:10:49 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 3210744 ']' 00:05:23.517 01:10:49 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 3210744 00:05:23.517 01:10:49 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:23.517 01:10:49 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:23.517 01:10:49 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3210744 00:05:23.517 01:10:49 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:23.517 01:10:49 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:23.517 01:10:49 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3210744' 00:05:23.517 killing process with pid 3210744 00:05:23.517 01:10:49 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 3210744 00:05:23.517 01:10:49 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 3210744 00:05:23.775 [2024-07-16 01:10:49.615563] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:24.034 
00:05:24.034 real 0m4.346s
00:05:24.034 user 0m8.294s
00:05:24.034 sys 0m0.352s
00:05:24.034 01:10:49 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:24.034 01:10:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:24.034 ************************************
00:05:24.034 END TEST event_scheduler
00:05:24.034 ************************************
00:05:24.034 01:10:49 event -- common/autotest_common.sh@1142 -- # return 0
00:05:24.034 01:10:49 event -- event/event.sh@51 -- # modprobe -n nbd
00:05:24.034 01:10:49 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:05:24.034 01:10:49 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:24.034 01:10:49 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:24.034 01:10:49 event -- common/autotest_common.sh@10 -- # set +x
00:05:24.034 ************************************
00:05:24.034 START TEST app_repeat
00:05:24.034 ************************************
00:05:24.034 01:10:49 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test
00:05:24.034 01:10:49 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:24.034 01:10:49 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:24.034 01:10:49 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:05:24.034 01:10:49 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:24.034 01:10:49 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:05:24.034 01:10:49 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:05:24.034 01:10:49 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:05:24.034 01:10:49 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3211490
00:05:24.034 01:10:49 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:05:24.034 01:10:49 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:05:24.034 01:10:49 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3211490'
00:05:24.034 Process app_repeat pid: 3211490
00:05:24.034 01:10:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:24.034 01:10:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:05:24.034 spdk_app_start Round 0
00:05:24.034 01:10:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3211490 /var/tmp/spdk-nbd.sock
00:05:24.034 01:10:49 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3211490 ']'
00:05:24.034 01:10:49 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:24.034 01:10:49 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:24.034 01:10:49 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:24.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:24.034 01:10:49 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:24.034 01:10:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:24.034 [2024-07-16 01:10:49.912704] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization...
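app_repeat, which starts here, boots the same SPDK application four times ("rounds") against one RPC socket and reruns an NBD data-verify pass each round. Roughly what the harness does, as a sketch; the binary path is shortened to the checkout-relative form, and waitforlisten is simplified to an RPC poll rather than the retry loop the trace shows:

    sock=/var/tmp/spdk-nbd.sock
    test/event/app_repeat/app_repeat -r "$sock" -m 0x3 -t 4 &
    repeat_pid=$!
    echo "Process app_repeat pid: $repeat_pid"
    # simplified waitforlisten: block until the app answers on the socket
    until scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done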
00:05:24.034 [2024-07-16 01:10:49.912760] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3211490 ] 00:05:24.034 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.034 [2024-07-16 01:10:49.971737] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.292 [2024-07-16 01:10:50.064448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.292 [2024-07-16 01:10:50.064452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.857 01:10:50 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:24.857 01:10:50 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:24.857 01:10:50 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:25.136 Malloc0 00:05:25.136 01:10:50 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:25.136 Malloc1 00:05:25.136 01:10:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:25.136 01:10:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.136 01:10:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:25.136 01:10:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:25.136 01:10:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.136 01:10:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:25.136 01:10:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:25.136 01:10:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.136 01:10:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:25.136 01:10:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:25.136 01:10:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.136 01:10:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:25.136 01:10:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:25.136 01:10:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:25.136 01:10:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.136 01:10:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:25.407 /dev/nbd0 00:05:25.407 01:10:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:25.407 01:10:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:25.407 01:10:51 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:25.407 01:10:51 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:25.407 01:10:51 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:25.407 01:10:51 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:25.407 01:10:51 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:25.407 01:10:51 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:25.407 01:10:51 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:25.407 01:10:51 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:25.407 01:10:51 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:25.407 1+0 records in 00:05:25.407 1+0 records out 00:05:25.407 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222302 s, 18.4 MB/s 00:05:25.407 01:10:51 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:25.407 01:10:51 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:25.407 01:10:51 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:25.407 01:10:51 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:25.407 01:10:51 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:25.407 01:10:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:25.407 01:10:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.407 01:10:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:25.664 /dev/nbd1 00:05:25.664 01:10:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:25.664 01:10:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:25.664 01:10:51 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:25.664 01:10:51 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:25.664 01:10:51 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:25.664 01:10:51 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:25.664 01:10:51 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:25.664 01:10:51 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:25.664 01:10:51 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:25.664 01:10:51 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:25.664 01:10:51 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:25.664 1+0 records in 00:05:25.664 1+0 records out 00:05:25.664 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189283 s, 21.6 MB/s 00:05:25.664 01:10:51 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:25.664 01:10:51 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:25.664 01:10:51 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:25.664 01:10:51 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:25.664 01:10:51 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:25.664 01:10:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:25.664 01:10:51 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.664 01:10:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:25.664 01:10:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.664 01:10:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:25.922 01:10:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:25.922 { 00:05:25.922 "nbd_device": "/dev/nbd0", 00:05:25.922 "bdev_name": "Malloc0" 00:05:25.922 }, 00:05:25.922 { 00:05:25.922 "nbd_device": "/dev/nbd1", 00:05:25.922 "bdev_name": "Malloc1" 00:05:25.922 } 00:05:25.922 ]' 00:05:25.922 01:10:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:25.922 { 00:05:25.922 "nbd_device": "/dev/nbd0", 00:05:25.922 "bdev_name": "Malloc0" 00:05:25.922 }, 00:05:25.922 { 00:05:25.922 "nbd_device": "/dev/nbd1", 00:05:25.922 "bdev_name": "Malloc1" 00:05:25.922 } 00:05:25.922 ]' 00:05:25.922 01:10:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:25.922 01:10:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:25.922 /dev/nbd1' 00:05:25.922 01:10:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:25.922 /dev/nbd1' 00:05:25.922 01:10:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:25.922 01:10:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:25.922 01:10:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:25.922 01:10:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:25.922 01:10:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:25.922 01:10:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:25.922 01:10:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.922 01:10:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:25.922 01:10:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:25.922 01:10:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:25.922 01:10:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:25.922 01:10:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:25.922 256+0 records in 00:05:25.922 256+0 records out 00:05:25.922 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00319556 s, 328 MB/s 00:05:25.922 01:10:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:25.922 01:10:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:25.922 256+0 records in 00:05:25.922 256+0 records out 00:05:25.922 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0133334 s, 78.6 MB/s 00:05:25.922 01:10:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:25.922 01:10:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:25.922 256+0 records in 00:05:25.922 256+0 records out 00:05:25.922 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0143092 s, 73.3 MB/s 00:05:25.922 01:10:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:25.922 01:10:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.922 01:10:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:25.922 01:10:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:25.922 01:10:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:25.922 01:10:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:25.922 01:10:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:25.922 01:10:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:25.922 01:10:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:25.922 01:10:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:25.922 01:10:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:25.922 01:10:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:25.922 01:10:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:25.922 01:10:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.922 01:10:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.922 01:10:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:25.922 01:10:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:25.922 01:10:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:25.922 01:10:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:26.180 01:10:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:26.180 01:10:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:26.180 01:10:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:26.180 01:10:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:26.180 01:10:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:26.180 01:10:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:26.180 01:10:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:26.180 01:10:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:26.180 01:10:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:26.180 01:10:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:26.180 01:10:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:26.180 01:10:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:26.180 01:10:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:26.180 01:10:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:26.180 01:10:52 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:26.180 01:10:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:26.180 01:10:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:26.180 01:10:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:26.180 01:10:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:26.180 01:10:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.180 01:10:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:26.437 01:10:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:26.437 01:10:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:26.437 01:10:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:26.437 01:10:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:26.437 01:10:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:26.437 01:10:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:26.437 01:10:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:26.437 01:10:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:26.437 01:10:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:26.437 01:10:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:26.437 01:10:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:26.437 01:10:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:26.437 01:10:52 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:26.694 01:10:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:26.952 [2024-07-16 01:10:52.730357] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:26.952 [2024-07-16 01:10:52.803225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.952 [2024-07-16 01:10:52.803227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.952 [2024-07-16 01:10:52.844465] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:26.952 [2024-07-16 01:10:52.844507] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:30.228 01:10:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:30.228 01:10:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:30.228 spdk_app_start Round 1 00:05:30.228 01:10:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3211490 /var/tmp/spdk-nbd.sock 00:05:30.228 01:10:55 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3211490 ']' 00:05:30.228 01:10:55 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:30.228 01:10:55 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.228 01:10:55 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:30.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
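Each round repeats the data-verify pass that Round 0 just completed above: export two 64 MB malloc bdevs over NBD, write identical random data through the kernel block devices, and byte-compare the result. One device's worth of that loop, sketched with the temp file shortened to /tmp for readability:

    sock=/var/tmp/spdk-nbd.sock
    rpc="scripts/rpc.py -s $sock"
    $rpc bdev_malloc_create 64 4096            # 64 MB malloc bdev, 4 KiB blocks -> Malloc0
    $rpc nbd_start_disk Malloc0 /dev/nbd0      # expose it as a kernel block device
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0    # verify the round trip byte for byte
    $rpc nbd_stop_disk /dev/nbd0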
00:05:30.228 01:10:55 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.228 01:10:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:30.228 01:10:55 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.228 01:10:55 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:30.228 01:10:55 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.228 Malloc0 00:05:30.228 01:10:55 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.228 Malloc1 00:05:30.228 01:10:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.228 01:10:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.228 01:10:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.228 01:10:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:30.228 01:10:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.228 01:10:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:30.228 01:10:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.228 01:10:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.228 01:10:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.228 01:10:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:30.228 01:10:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.228 01:10:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:30.228 01:10:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:30.228 01:10:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:30.228 01:10:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.228 01:10:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:30.485 /dev/nbd0 00:05:30.485 01:10:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:30.485 01:10:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:30.485 01:10:56 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:30.485 01:10:56 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:30.485 01:10:56 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:30.485 01:10:56 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:30.486 01:10:56 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:30.486 01:10:56 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:30.486 01:10:56 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:30.486 01:10:56 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:30.486 01:10:56 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:30.486 1+0 records in 00:05:30.486 1+0 records out 00:05:30.486 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000115498 s, 35.5 MB/s 00:05:30.486 01:10:56 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.486 01:10:56 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:30.486 01:10:56 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.486 01:10:56 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:30.486 01:10:56 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:30.486 01:10:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.486 01:10:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.486 01:10:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:30.486 /dev/nbd1 00:05:30.486 01:10:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:30.486 01:10:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:30.486 01:10:56 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:30.486 01:10:56 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:30.486 01:10:56 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:30.486 01:10:56 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:30.486 01:10:56 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:30.486 01:10:56 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:30.486 01:10:56 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:30.486 01:10:56 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:30.486 01:10:56 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.486 1+0 records in 00:05:30.486 1+0 records out 00:05:30.486 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249263 s, 16.4 MB/s 00:05:30.486 01:10:56 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.486 01:10:56 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:30.486 01:10:56 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.486 01:10:56 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:30.486 01:10:56 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:30.486 01:10:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.743 01:10:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.743 01:10:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.743 01:10:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.743 01:10:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:30.743 01:10:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:30.743 { 00:05:30.743 "nbd_device": "/dev/nbd0", 00:05:30.743 "bdev_name": "Malloc0" 00:05:30.743 }, 00:05:30.743 { 00:05:30.743 "nbd_device": "/dev/nbd1", 00:05:30.743 "bdev_name": "Malloc1" 00:05:30.743 } 00:05:30.743 ]' 00:05:30.743 01:10:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:30.743 { 00:05:30.743 "nbd_device": "/dev/nbd0", 00:05:30.743 "bdev_name": "Malloc0" 00:05:30.743 }, 00:05:30.743 { 00:05:30.743 "nbd_device": "/dev/nbd1", 00:05:30.743 "bdev_name": "Malloc1" 00:05:30.743 } 00:05:30.743 ]' 00:05:30.743 01:10:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:30.743 01:10:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:30.743 /dev/nbd1' 00:05:30.743 01:10:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:30.743 /dev/nbd1' 00:05:30.743 01:10:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:30.743 01:10:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:30.743 01:10:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:30.743 01:10:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:30.743 01:10:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:30.743 01:10:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:30.743 01:10:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.743 01:10:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.743 01:10:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:30.743 01:10:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:30.743 01:10:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:30.743 01:10:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:30.743 256+0 records in 00:05:30.743 256+0 records out 00:05:30.743 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103377 s, 101 MB/s 00:05:30.743 01:10:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.743 01:10:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:30.743 256+0 records in 00:05:30.743 256+0 records out 00:05:30.743 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130335 s, 80.5 MB/s 00:05:30.743 01:10:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.743 01:10:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:31.000 256+0 records in 00:05:31.000 256+0 records out 00:05:31.000 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143751 s, 72.9 MB/s 00:05:31.000 01:10:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:31.000 01:10:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.000 01:10:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.000 01:10:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:31.000 01:10:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:31.000 01:10:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:31.000 01:10:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:31.000 01:10:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:31.000 01:10:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:31.000 01:10:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:31.000 01:10:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:31.000 01:10:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:31.000 01:10:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:31.000 01:10:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.000 01:10:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.000 01:10:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:31.000 01:10:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:31.000 01:10:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.000 01:10:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:31.000 01:10:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:31.000 01:10:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:31.000 01:10:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:31.000 01:10:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.000 01:10:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.000 01:10:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:31.000 01:10:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.000 01:10:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.000 01:10:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.000 01:10:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:31.257 01:10:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:31.257 01:10:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:31.257 01:10:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:31.257 01:10:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.257 01:10:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.257 01:10:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:31.257 01:10:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.257 01:10:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.257 01:10:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.257 01:10:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.257 01:10:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.515 01:10:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:31.515 01:10:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:31.515 01:10:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:31.515 01:10:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:31.515 01:10:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:31.515 01:10:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.515 01:10:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:31.515 01:10:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:31.515 01:10:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:31.515 01:10:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:31.515 01:10:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:31.515 01:10:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:31.515 01:10:57 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:31.772 01:10:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:31.773 [2024-07-16 01:10:57.746743] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:32.030 [2024-07-16 01:10:57.813883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.030 [2024-07-16 01:10:57.813885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.030 [2024-07-16 01:10:57.855798] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:32.030 [2024-07-16 01:10:57.855851] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:35.312 01:11:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:35.312 01:11:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:35.312 spdk_app_start Round 2 00:05:35.312 01:11:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3211490 /var/tmp/spdk-nbd.sock 00:05:35.312 01:11:00 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3211490 ']' 00:05:35.312 01:11:00 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:35.312 01:11:00 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:35.312 01:11:00 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:35.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:35.312 01:11:00 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:35.312 01:11:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:35.312 01:11:00 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:35.312 01:11:00 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:35.312 01:11:00 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.312 Malloc0 00:05:35.312 01:11:00 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.312 Malloc1 00:05:35.312 01:11:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.312 01:11:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.312 01:11:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.312 01:11:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:35.312 01:11:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.312 01:11:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:35.312 01:11:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.312 01:11:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.312 01:11:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.312 01:11:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:35.312 01:11:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.312 01:11:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:35.312 01:11:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:35.312 01:11:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:35.312 01:11:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.312 01:11:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:35.312 /dev/nbd0 00:05:35.312 01:11:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:35.312 01:11:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:35.312 01:11:01 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:35.312 01:11:01 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:35.312 01:11:01 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:35.312 01:11:01 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:35.312 01:11:01 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:35.312 01:11:01 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:35.312 01:11:01 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:35.312 01:11:01 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:35.312 01:11:01 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:35.312 1+0 records in 00:05:35.312 1+0 records out 00:05:35.312 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000177074 s, 23.1 MB/s 00:05:35.312 01:11:01 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.312 01:11:01 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:35.312 01:11:01 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.312 01:11:01 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:35.312 01:11:01 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:35.571 01:11:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.571 01:11:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.571 01:11:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:35.571 /dev/nbd1 00:05:35.571 01:11:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:35.571 01:11:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:35.571 01:11:01 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:35.571 01:11:01 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:35.571 01:11:01 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:35.571 01:11:01 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:35.571 01:11:01 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:35.571 01:11:01 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:35.571 01:11:01 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:35.571 01:11:01 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:35.571 01:11:01 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:35.571 1+0 records in 00:05:35.571 1+0 records out 00:05:35.571 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000183367 s, 22.3 MB/s 00:05:35.571 01:11:01 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.571 01:11:01 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:35.571 01:11:01 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.571 01:11:01 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:35.571 01:11:01 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:35.571 01:11:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.571 01:11:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.571 01:11:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:35.571 01:11:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.571 01:11:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:35.830 01:11:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:35.830 { 00:05:35.830 "nbd_device": "/dev/nbd0", 00:05:35.830 "bdev_name": "Malloc0" 00:05:35.830 }, 00:05:35.830 { 00:05:35.830 "nbd_device": "/dev/nbd1", 00:05:35.830 "bdev_name": "Malloc1" 00:05:35.830 } 00:05:35.830 ]' 00:05:35.830 01:11:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:35.830 { 00:05:35.830 "nbd_device": "/dev/nbd0", 00:05:35.830 "bdev_name": "Malloc0" 00:05:35.830 }, 00:05:35.830 { 00:05:35.830 "nbd_device": "/dev/nbd1", 00:05:35.830 "bdev_name": "Malloc1" 00:05:35.830 } 00:05:35.830 ]' 00:05:35.830 01:11:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:35.830 01:11:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:35.830 /dev/nbd1' 00:05:35.830 01:11:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:35.830 /dev/nbd1' 00:05:35.830 01:11:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:35.830 01:11:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:35.830 01:11:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:35.830 01:11:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:35.830 01:11:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:35.830 01:11:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:35.830 01:11:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.830 01:11:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:35.830 01:11:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:35.830 01:11:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:35.830 01:11:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:35.830 01:11:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:35.830 256+0 records in 00:05:35.830 256+0 records out 00:05:35.830 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01026 s, 102 MB/s 00:05:35.830 01:11:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:35.830 01:11:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:35.830 256+0 records in 00:05:35.830 256+0 records out 00:05:35.830 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135422 s, 77.4 MB/s 00:05:35.830 01:11:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:35.831 01:11:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:35.831 256+0 records in 00:05:35.831 256+0 records out 00:05:35.831 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145755 s, 71.9 MB/s 00:05:35.831 01:11:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:35.831 01:11:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.831 01:11:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:35.831 01:11:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:35.831 01:11:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:35.831 01:11:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:35.831 01:11:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:35.831 01:11:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:35.831 01:11:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:35.831 01:11:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:35.831 01:11:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:35.831 01:11:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:35.831 01:11:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:35.831 01:11:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.831 01:11:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.831 01:11:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:35.831 01:11:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:35.831 01:11:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:35.831 01:11:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:36.089 01:11:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:36.089 01:11:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:36.089 01:11:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:36.089 01:11:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.089 01:11:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.089 01:11:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:36.089 01:11:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:36.089 01:11:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.089 01:11:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.089 01:11:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:36.348 01:11:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:36.348 01:11:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:36.348 01:11:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:36.348 01:11:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.348 01:11:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.348 01:11:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:36.348 01:11:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:36.348 01:11:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.348 01:11:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.348 01:11:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.348 01:11:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.608 01:11:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:36.608 01:11:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:36.608 01:11:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.608 01:11:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:36.608 01:11:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:36.608 01:11:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.608 01:11:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:36.608 01:11:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:36.608 01:11:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:36.608 01:11:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:36.608 01:11:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:36.608 01:11:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:36.608 01:11:02 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:36.866 01:11:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:36.866 [2024-07-16 01:11:02.788735] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.124 [2024-07-16 01:11:02.854535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.124 [2024-07-16 01:11:02.854536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.124 [2024-07-16 01:11:02.895740] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:37.124 [2024-07-16 01:11:02.895784] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:39.655 01:11:05 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3211490 /var/tmp/spdk-nbd.sock 00:05:39.655 01:11:05 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3211490 ']' 00:05:39.655 01:11:05 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:39.655 01:11:05 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:39.655 01:11:05 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:39.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
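After stopping both NBD disks, each round's teardown (visible just above) asks the app for its remaining exports and requires the count to be zero before killing the instance. The check, sketched:

    sock=/var/tmp/spdk-nbd.sock
    nbd_disks_json=$(scripts/rpc.py -s "$sock" nbd_get_disks)
    count=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    if [ "$count" -ne 0 ]; then
        echo "unexpected nbd disks still exported" >&2
        exit 1
    fi
    scripts/rpc.py -s "$sock" spdk_kill_instance SIGTERM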
00:05:39.655 01:11:05 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:39.655 01:11:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:39.914 01:11:05 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:39.914 01:11:05 event.app_repeat -- common/autotest_common.sh@862 -- # return 0
00:05:39.914 01:11:05 event.app_repeat -- event/event.sh@39 -- # killprocess 3211490
00:05:39.914 01:11:05 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 3211490 ']'
00:05:39.914 01:11:05 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 3211490
00:05:39.914 01:11:05 event.app_repeat -- common/autotest_common.sh@953 -- # uname
00:05:39.914 01:11:05 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:39.914 01:11:05 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3211490
00:05:39.914 01:11:05 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:39.914 01:11:05 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:39.914 01:11:05 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3211490'
00:05:39.914 killing process with pid 3211490
00:05:39.914 01:11:05 event.app_repeat -- common/autotest_common.sh@967 -- # kill 3211490
00:05:39.914 01:11:05 event.app_repeat -- common/autotest_common.sh@972 -- # wait 3211490
00:05:40.174 spdk_app_start is called in Round 0.
00:05:40.174 Shutdown signal received, stop current app iteration
00:05:40.174 Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 reinitialization...
00:05:40.174 spdk_app_start is called in Round 1.
00:05:40.174 Shutdown signal received, stop current app iteration
00:05:40.174 Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 reinitialization...
00:05:40.174 spdk_app_start is called in Round 2.
00:05:40.174 Shutdown signal received, stop current app iteration
00:05:40.174 Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 reinitialization...
00:05:40.174 spdk_app_start is called in Round 3.
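killprocess, whose checks are traced above, refuses to signal anything that is not a live process under its control: it verifies the pid is non-empty and running, inspects the process name, and explicitly skips sudo wrappers before killing and reaping. A simplified reconstruction of the helper:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" 2>/dev/null || return 1        # must still be running
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1    # never kill the sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true               # reap and ignore the exit code
    }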
00:05:40.174 Shutdown signal received, stop current app iteration 00:05:40.174 01:11:05 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:40.174 01:11:05 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:40.174 00:05:40.174 real 0m16.109s 00:05:40.174 user 0m34.840s 00:05:40.174 sys 0m2.295s 00:05:40.174 01:11:05 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.174 01:11:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:40.174 ************************************ 00:05:40.174 END TEST app_repeat 00:05:40.174 ************************************ 00:05:40.174 01:11:06 event -- common/autotest_common.sh@1142 -- # return 0 00:05:40.174 01:11:06 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:40.174 01:11:06 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:40.174 01:11:06 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:40.174 01:11:06 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.174 01:11:06 event -- common/autotest_common.sh@10 -- # set +x 00:05:40.174 ************************************ 00:05:40.174 START TEST cpu_locks 00:05:40.174 ************************************ 00:05:40.174 01:11:06 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:40.174 * Looking for test storage... 00:05:40.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:40.174 01:11:06 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:40.174 01:11:06 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:40.174 01:11:06 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:40.174 01:11:06 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:40.174 01:11:06 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:40.174 01:11:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.174 01:11:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.433 ************************************ 00:05:40.433 START TEST default_locks 00:05:40.433 ************************************ 00:05:40.433 01:11:06 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:40.433 01:11:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3214475 00:05:40.433 01:11:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3214475 00:05:40.433 01:11:06 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3214475 ']' 00:05:40.433 01:11:06 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.433 01:11:06 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.433 01:11:06 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:40.433 01:11:06 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.433 01:11:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.433 01:11:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:40.433 [2024-07-16 01:11:06.215244] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:05:40.433 [2024-07-16 01:11:06.215288] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3214475 ] 00:05:40.433 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.433 [2024-07-16 01:11:06.270469] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.433 [2024-07-16 01:11:06.349282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.369 01:11:06 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.369 01:11:06 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:41.369 01:11:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3214475 00:05:41.369 01:11:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3214475 00:05:41.369 01:11:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:41.369 lslocks: write error 00:05:41.369 01:11:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3214475 00:05:41.369 01:11:07 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 3214475 ']' 00:05:41.369 01:11:07 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 3214475 00:05:41.369 01:11:07 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:41.369 01:11:07 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:41.369 01:11:07 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3214475 00:05:41.369 01:11:07 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:41.369 01:11:07 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:41.369 01:11:07 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3214475' 00:05:41.369 killing process with pid 3214475 00:05:41.369 01:11:07 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 3214475 00:05:41.369 01:11:07 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 3214475 00:05:41.628 01:11:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3214475 00:05:41.628 01:11:07 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:41.628 01:11:07 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3214475 00:05:41.628 01:11:07 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:41.628 01:11:07 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:41.628 01:11:07 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:41.628 01:11:07 
event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:41.628 01:11:07 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 3214475 00:05:41.628 01:11:07 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3214475 ']' 00:05:41.628 01:11:07 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.628 01:11:07 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.628 01:11:07 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.628 01:11:07 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.628 01:11:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:41.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3214475) - No such process 00:05:41.628 ERROR: process (pid: 3214475) is no longer running 00:05:41.628 01:11:07 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.628 01:11:07 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:41.628 01:11:07 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:41.628 01:11:07 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:41.628 01:11:07 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:41.628 01:11:07 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:41.628 01:11:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:41.628 01:11:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:41.628 01:11:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:41.628 01:11:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:41.628 00:05:41.628 real 0m1.286s 00:05:41.628 user 0m1.339s 00:05:41.628 sys 0m0.388s 00:05:41.628 01:11:07 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.628 01:11:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:41.628 ************************************ 00:05:41.628 END TEST default_locks 00:05:41.628 ************************************ 00:05:41.628 01:11:07 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:41.628 01:11:07 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:41.628 01:11:07 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.628 01:11:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.628 01:11:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:41.628 ************************************ 00:05:41.628 START TEST default_locks_via_rpc 00:05:41.628 ************************************ 00:05:41.628 01:11:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:41.628 01:11:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3214735 00:05:41.628 01:11:07 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3214735 00:05:41.628 01:11:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:41.629 01:11:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3214735 ']' 00:05:41.629 01:11:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.629 01:11:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.629 01:11:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.629 01:11:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.629 01:11:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.629 [2024-07-16 01:11:07.567117] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:05:41.629 [2024-07-16 01:11:07.567161] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3214735 ] 00:05:41.629 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.888 [2024-07-16 01:11:07.621711] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.888 [2024-07-16 01:11:07.700664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.464 01:11:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.464 01:11:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:42.464 01:11:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:42.464 01:11:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.464 01:11:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.464 01:11:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.464 01:11:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:42.464 01:11:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:42.465 01:11:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:42.465 01:11:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:42.465 01:11:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:42.465 01:11:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.465 01:11:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.465 01:11:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.465 01:11:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3214735 00:05:42.465 01:11:08 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@22 -- # lslocks -p 3214735 00:05:42.465 01:11:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:43.031 01:11:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3214735 00:05:43.031 01:11:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 3214735 ']' 00:05:43.031 01:11:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 3214735 00:05:43.031 01:11:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:43.031 01:11:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:43.031 01:11:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3214735 00:05:43.031 01:11:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:43.031 01:11:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:43.031 01:11:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3214735' 00:05:43.031 killing process with pid 3214735 00:05:43.031 01:11:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 3214735 00:05:43.031 01:11:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 3214735 00:05:43.289 00:05:43.289 real 0m1.569s 00:05:43.289 user 0m1.652s 00:05:43.289 sys 0m0.496s 00:05:43.289 01:11:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.289 01:11:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.289 ************************************ 00:05:43.289 END TEST default_locks_via_rpc 00:05:43.289 ************************************ 00:05:43.289 01:11:09 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:43.289 01:11:09 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:43.289 01:11:09 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.289 01:11:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.289 01:11:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.289 ************************************ 00:05:43.289 START TEST non_locking_app_on_locked_coremask 00:05:43.289 ************************************ 00:05:43.289 01:11:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:43.289 01:11:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:43.289 01:11:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3215089 00:05:43.289 01:11:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3215089 /var/tmp/spdk.sock 00:05:43.289 01:11:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3215089 ']' 00:05:43.289 01:11:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.289 01:11:09 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.289 01:11:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.289 01:11:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.289 01:11:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.289 [2024-07-16 01:11:09.184192] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:05:43.289 [2024-07-16 01:11:09.184229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3215089 ] 00:05:43.289 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.289 [2024-07-16 01:11:09.238869] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.548 [2024-07-16 01:11:09.318168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.112 01:11:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.112 01:11:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:44.112 01:11:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3215231 00:05:44.112 01:11:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3215231 /var/tmp/spdk2.sock 00:05:44.112 01:11:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:44.112 01:11:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3215231 ']' 00:05:44.112 01:11:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:44.112 01:11:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.112 01:11:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:44.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:44.112 01:11:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.112 01:11:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.112 [2024-07-16 01:11:10.036586] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
00:05:44.112 [2024-07-16 01:11:10.036638] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3215231 ] 00:05:44.112 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.370 [2024-07-16 01:11:10.114298] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:44.370 [2024-07-16 01:11:10.114323] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.370 [2024-07-16 01:11:10.259689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.936 01:11:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.936 01:11:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:44.936 01:11:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3215089 00:05:44.936 01:11:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3215089 00:05:44.936 01:11:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:45.502 lslocks: write error 00:05:45.502 01:11:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3215089 00:05:45.502 01:11:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3215089 ']' 00:05:45.502 01:11:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3215089 00:05:45.502 01:11:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:45.502 01:11:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:45.502 01:11:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3215089 00:05:45.502 01:11:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:45.502 01:11:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:45.502 01:11:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3215089' 00:05:45.502 killing process with pid 3215089 00:05:45.502 01:11:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3215089 00:05:45.502 01:11:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3215089 00:05:46.436 01:11:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3215231 00:05:46.436 01:11:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3215231 ']' 00:05:46.436 01:11:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3215231 00:05:46.436 01:11:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:46.436 01:11:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:46.436 01:11:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- 
# ps --no-headers -o comm= 3215231 00:05:46.436 01:11:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:46.436 01:11:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:46.436 01:11:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3215231' 00:05:46.436 killing process with pid 3215231 00:05:46.436 01:11:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3215231 00:05:46.436 01:11:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3215231 00:05:46.436 00:05:46.436 real 0m3.265s 00:05:46.436 user 0m3.493s 00:05:46.436 sys 0m0.920s 00:05:46.436 01:11:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.436 01:11:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.436 ************************************ 00:05:46.436 END TEST non_locking_app_on_locked_coremask 00:05:46.436 ************************************ 00:05:46.694 01:11:12 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:46.694 01:11:12 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:46.694 01:11:12 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.694 01:11:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.694 01:11:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.694 ************************************ 00:05:46.694 START TEST locking_app_on_unlocked_coremask 00:05:46.694 ************************************ 00:05:46.694 01:11:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:46.694 01:11:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3215718 00:05:46.694 01:11:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3215718 /var/tmp/spdk.sock 00:05:46.694 01:11:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:46.694 01:11:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3215718 ']' 00:05:46.694 01:11:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.694 01:11:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.694 01:11:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:46.694 01:11:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.694 01:11:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.694 [2024-07-16 01:11:12.525267] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:05:46.694 [2024-07-16 01:11:12.525311] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3215718 ] 00:05:46.694 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.694 [2024-07-16 01:11:12.579056] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:46.694 [2024-07-16 01:11:12.579081] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.694 [2024-07-16 01:11:12.646407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.628 01:11:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.628 01:11:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:47.628 01:11:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3215872 00:05:47.628 01:11:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3215872 /var/tmp/spdk2.sock 00:05:47.628 01:11:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:47.628 01:11:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3215872 ']' 00:05:47.628 01:11:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.628 01:11:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.628 01:11:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:47.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:47.628 01:11:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.628 01:11:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.628 [2024-07-16 01:11:13.365234] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
00:05:47.628 [2024-07-16 01:11:13.365285] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3215872 ] 00:05:47.628 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.628 [2024-07-16 01:11:13.442906] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.628 [2024-07-16 01:11:13.586864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.192 01:11:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:48.192 01:11:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:48.192 01:11:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3215872 00:05:48.192 01:11:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3215872 00:05:48.192 01:11:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:48.779 lslocks: write error 00:05:48.779 01:11:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3215718 00:05:48.779 01:11:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3215718 ']' 00:05:48.779 01:11:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3215718 00:05:48.779 01:11:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:48.779 01:11:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:48.779 01:11:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3215718 00:05:48.779 01:11:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:48.779 01:11:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:48.779 01:11:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3215718' 00:05:48.779 killing process with pid 3215718 00:05:48.779 01:11:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 3215718 00:05:48.779 01:11:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3215718 00:05:49.345 01:11:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3215872 00:05:49.345 01:11:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3215872 ']' 00:05:49.345 01:11:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3215872 00:05:49.345 01:11:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:49.345 01:11:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:49.345 01:11:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3215872 00:05:49.345 01:11:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:05:49.345 01:11:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:49.345 01:11:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3215872' 00:05:49.345 killing process with pid 3215872 00:05:49.345 01:11:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 3215872 00:05:49.345 01:11:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3215872 00:05:49.603 00:05:49.603 real 0m2.982s 00:05:49.603 user 0m3.220s 00:05:49.603 sys 0m0.813s 00:05:49.603 01:11:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.603 01:11:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.603 ************************************ 00:05:49.603 END TEST locking_app_on_unlocked_coremask 00:05:49.603 ************************************ 00:05:49.603 01:11:15 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:49.603 01:11:15 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:49.603 01:11:15 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.603 01:11:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.603 01:11:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.603 ************************************ 00:05:49.603 START TEST locking_app_on_locked_coremask 00:05:49.603 ************************************ 00:05:49.603 01:11:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:49.603 01:11:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3216222 00:05:49.603 01:11:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3216222 /var/tmp/spdk.sock 00:05:49.603 01:11:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:49.603 01:11:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3216222 ']' 00:05:49.603 01:11:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.603 01:11:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.603 01:11:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.603 01:11:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.603 01:11:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.603 [2024-07-16 01:11:15.571408] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
00:05:49.603 [2024-07-16 01:11:15.571453] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3216222 ] 00:05:49.862 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.862 [2024-07-16 01:11:15.627998] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.862 [2024-07-16 01:11:15.695024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.428 01:11:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.428 01:11:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:50.428 01:11:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3216449 00:05:50.428 01:11:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3216449 /var/tmp/spdk2.sock 00:05:50.428 01:11:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:50.428 01:11:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:50.428 01:11:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3216449 /var/tmp/spdk2.sock 00:05:50.428 01:11:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:50.428 01:11:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:50.428 01:11:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:50.428 01:11:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:50.428 01:11:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3216449 /var/tmp/spdk2.sock 00:05:50.428 01:11:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3216449 ']' 00:05:50.428 01:11:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:50.428 01:11:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.428 01:11:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:50.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:50.428 01:11:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.428 01:11:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.428 [2024-07-16 01:11:16.401453] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
00:05:50.428 [2024-07-16 01:11:16.401499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3216449 ] 00:05:50.686 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.686 [2024-07-16 01:11:16.477620] app.c: 776:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3216222 has claimed it. 00:05:50.686 [2024-07-16 01:11:16.477658] app.c: 907:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:51.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3216449) - No such process 00:05:51.252 ERROR: process (pid: 3216449) is no longer running 00:05:51.252 01:11:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.252 01:11:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:51.252 01:11:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:51.252 01:11:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:51.252 01:11:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:51.252 01:11:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:51.252 01:11:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3216222 00:05:51.252 01:11:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3216222 00:05:51.252 01:11:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:51.510 lslocks: write error 00:05:51.510 01:11:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3216222 00:05:51.510 01:11:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3216222 ']' 00:05:51.510 01:11:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3216222 00:05:51.510 01:11:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:51.510 01:11:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:51.510 01:11:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3216222 00:05:51.769 01:11:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:51.769 01:11:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:51.769 01:11:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3216222' 00:05:51.769 killing process with pid 3216222 00:05:51.769 01:11:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3216222 00:05:51.769 01:11:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3216222 00:05:52.062 00:05:52.062 real 0m2.298s 00:05:52.062 user 0m2.516s 00:05:52.062 sys 0m0.643s 00:05:52.062 01:11:17 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.062 01:11:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.062 ************************************ 00:05:52.062 END TEST locking_app_on_locked_coremask 00:05:52.062 ************************************ 00:05:52.062 01:11:17 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:52.062 01:11:17 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:52.062 01:11:17 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:52.062 01:11:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.062 01:11:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.062 ************************************ 00:05:52.062 START TEST locking_overlapped_coremask 00:05:52.062 ************************************ 00:05:52.062 01:11:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:52.062 01:11:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3216714 00:05:52.063 01:11:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3216714 /var/tmp/spdk.sock 00:05:52.063 01:11:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3216714 ']' 00:05:52.063 01:11:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.063 01:11:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:52.063 01:11:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.063 01:11:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:52.063 01:11:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.063 01:11:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:52.063 [2024-07-16 01:11:17.929211] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
00:05:52.063 [2024-07-16 01:11:17.929250] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3216714 ] 00:05:52.063 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.063 [2024-07-16 01:11:17.984232] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:52.368 [2024-07-16 01:11:18.067261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.368 [2024-07-16 01:11:18.067277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.368 [2024-07-16 01:11:18.067279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.935 01:11:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:52.935 01:11:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:52.935 01:11:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3216835 00:05:52.935 01:11:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3216835 /var/tmp/spdk2.sock 00:05:52.935 01:11:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:52.935 01:11:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:52.935 01:11:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3216835 /var/tmp/spdk2.sock 00:05:52.935 01:11:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:52.935 01:11:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:52.935 01:11:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:52.935 01:11:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:52.935 01:11:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3216835 /var/tmp/spdk2.sock 00:05:52.935 01:11:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3216835 ']' 00:05:52.935 01:11:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.935 01:11:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:52.935 01:11:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:52.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:52.935 01:11:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:52.935 01:11:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.935 [2024-07-16 01:11:18.790926] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
00:05:52.935 [2024-07-16 01:11:18.790977] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3216835 ] 00:05:52.935 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.935 [2024-07-16 01:11:18.872922] app.c: 776:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3216714 has claimed it. 00:05:52.935 [2024-07-16 01:11:18.872960] app.c: 907:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:53.502 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3216835) - No such process 00:05:53.502 ERROR: process (pid: 3216835) is no longer running 00:05:53.502 01:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.502 01:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:53.502 01:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:53.502 01:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:53.502 01:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:53.502 01:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:53.502 01:11:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:53.502 01:11:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:53.502 01:11:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:53.502 01:11:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:53.502 01:11:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3216714 00:05:53.502 01:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 3216714 ']' 00:05:53.502 01:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 3216714 00:05:53.502 01:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:53.502 01:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:53.502 01:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3216714 00:05:53.502 01:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:53.502 01:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:53.503 01:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3216714' 00:05:53.503 killing process with pid 3216714 00:05:53.503 01:11:19 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 3216714 00:05:53.503 01:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 3216714 00:05:54.070 00:05:54.070 real 0m1.896s 00:05:54.070 user 0m5.364s 00:05:54.070 sys 0m0.407s 00:05:54.070 01:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.070 01:11:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.070 ************************************ 00:05:54.070 END TEST locking_overlapped_coremask 00:05:54.070 ************************************ 00:05:54.070 01:11:19 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:54.070 01:11:19 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:54.070 01:11:19 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:54.070 01:11:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.070 01:11:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.070 ************************************ 00:05:54.070 START TEST locking_overlapped_coremask_via_rpc 00:05:54.070 ************************************ 00:05:54.071 01:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:54.071 01:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3216990 00:05:54.071 01:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:54.071 01:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3216990 /var/tmp/spdk.sock 00:05:54.071 01:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3216990 ']' 00:05:54.071 01:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.071 01:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.071 01:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.071 01:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.071 01:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.071 [2024-07-16 01:11:19.887046] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:05:54.071 [2024-07-16 01:11:19.887090] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3216990 ] 00:05:54.071 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.071 [2024-07-16 01:11:19.943553] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:54.071 [2024-07-16 01:11:19.943577] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:54.071 [2024-07-16 01:11:20.016390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.071 [2024-07-16 01:11:20.016406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.071 [2024-07-16 01:11:20.016408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.004 01:11:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.004 01:11:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:55.004 01:11:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:55.004 01:11:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3217219 00:05:55.004 01:11:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3217219 /var/tmp/spdk2.sock 00:05:55.004 01:11:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3217219 ']' 00:05:55.004 01:11:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.004 01:11:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.004 01:11:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:55.004 01:11:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.004 01:11:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.004 [2024-07-16 01:11:20.716234] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:05:55.004 [2024-07-16 01:11:20.716281] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3217219 ] 00:05:55.004 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.004 [2024-07-16 01:11:20.794371] app.c: 911:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:55.004 [2024-07-16 01:11:20.794400] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:55.004 [2024-07-16 01:11:20.943917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:55.004 [2024-07-16 01:11:20.944033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:55.004 [2024-07-16 01:11:20.944034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:55.570 01:11:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.570 01:11:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:55.570 01:11:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:55.570 01:11:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.570 01:11:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.570 01:11:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.570 01:11:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:55.570 01:11:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:55.570 01:11:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:55.570 01:11:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:55.570 01:11:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.570 01:11:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:55.570 01:11:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.570 01:11:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:55.570 01:11:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.570 01:11:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.570 [2024-07-16 01:11:21.546408] app.c: 776:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3216990 has claimed it. 
00:05:55.570 request: 00:05:55.570 { 00:05:55.570 "method": "framework_enable_cpumask_locks", 00:05:55.570 "req_id": 1 00:05:55.570 } 00:05:55.570 Got JSON-RPC error response 00:05:55.570 response: 00:05:55.570 { 00:05:55.570 "code": -32603, 00:05:55.570 "message": "Failed to claim CPU core: 2" 00:05:55.570 } 00:05:55.570 01:11:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:55.570 01:11:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:55.570 01:11:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:55.570 01:11:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:55.570 01:11:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:55.828 01:11:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3216990 /var/tmp/spdk.sock 00:05:55.828 01:11:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3216990 ']' 00:05:55.828 01:11:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.828 01:11:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.828 01:11:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.828 01:11:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.828 01:11:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.828 01:11:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.828 01:11:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:55.828 01:11:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3217219 /var/tmp/spdk2.sock 00:05:55.828 01:11:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3217219 ']' 00:05:55.828 01:11:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.828 01:11:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.828 01:11:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:55.828 01:11:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.828 01:11:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.086 01:11:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.086 01:11:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:56.086 01:11:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:56.086 01:11:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:56.086 01:11:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:56.086 01:11:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:56.086 00:05:56.086 real 0m2.101s 00:05:56.086 user 0m0.892s 00:05:56.086 sys 0m0.145s 00:05:56.086 01:11:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.086 01:11:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.086 ************************************ 00:05:56.086 END TEST locking_overlapped_coremask_via_rpc 00:05:56.086 ************************************ 00:05:56.086 01:11:21 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:56.086 01:11:21 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:56.086 01:11:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3216990 ]] 00:05:56.086 01:11:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3216990 00:05:56.087 01:11:21 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3216990 ']' 00:05:56.087 01:11:21 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3216990 00:05:56.087 01:11:21 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:56.087 01:11:21 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:56.087 01:11:21 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3216990 00:05:56.087 01:11:22 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:56.087 01:11:22 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:56.087 01:11:22 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3216990' 00:05:56.087 killing process with pid 3216990 00:05:56.087 01:11:22 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3216990 00:05:56.087 01:11:22 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3216990 00:05:56.652 01:11:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3217219 ]] 00:05:56.652 01:11:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3217219 00:05:56.652 01:11:22 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3217219 ']' 00:05:56.652 01:11:22 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3217219 00:05:56.652 01:11:22 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:05:56.652 01:11:22 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:56.652 01:11:22 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3217219 00:05:56.652 01:11:22 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:56.652 01:11:22 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:56.652 01:11:22 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3217219' 00:05:56.652 killing process with pid 3217219 00:05:56.652 01:11:22 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3217219 00:05:56.652 01:11:22 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3217219 00:05:56.910 01:11:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:56.910 01:11:22 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:56.910 01:11:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3216990 ]] 00:05:56.910 01:11:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3216990 00:05:56.910 01:11:22 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3216990 ']' 00:05:56.910 01:11:22 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3216990 00:05:56.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3216990) - No such process 00:05:56.910 01:11:22 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3216990 is not found' 00:05:56.910 Process with pid 3216990 is not found 00:05:56.910 01:11:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3217219 ]] 00:05:56.910 01:11:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3217219 00:05:56.910 01:11:22 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3217219 ']' 00:05:56.910 01:11:22 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3217219 00:05:56.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3217219) - No such process 00:05:56.910 01:11:22 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3217219 is not found' 00:05:56.910 Process with pid 3217219 is not found 00:05:56.910 01:11:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:56.910 00:05:56.910 real 0m16.647s 00:05:56.910 user 0m29.039s 00:05:56.910 sys 0m4.662s 00:05:56.910 01:11:22 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.910 01:11:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.910 ************************************ 00:05:56.910 END TEST cpu_locks 00:05:56.910 ************************************ 00:05:56.910 01:11:22 event -- common/autotest_common.sh@1142 -- # return 0 00:05:56.910 00:05:56.910 real 0m41.226s 00:05:56.910 user 1m18.780s 00:05:56.910 sys 0m7.851s 00:05:56.910 01:11:22 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.910 01:11:22 event -- common/autotest_common.sh@10 -- # set +x 00:05:56.910 ************************************ 00:05:56.910 END TEST event 00:05:56.910 ************************************ 00:05:56.910 01:11:22 -- common/autotest_common.sh@1142 -- # return 0 00:05:56.910 01:11:22 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:56.910 01:11:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.910 01:11:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.910 
01:11:22 -- common/autotest_common.sh@10 -- # set +x 00:05:56.910 ************************************ 00:05:56.910 START TEST thread 00:05:56.910 ************************************ 00:05:56.910 01:11:22 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:56.910 * Looking for test storage... 00:05:56.910 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:56.910 01:11:22 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:56.910 01:11:22 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:56.910 01:11:22 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.910 01:11:22 thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.168 ************************************ 00:05:57.168 START TEST thread_poller_perf 00:05:57.168 ************************************ 00:05:57.168 01:11:22 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:57.168 [2024-07-16 01:11:22.942228] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:05:57.168 [2024-07-16 01:11:22.942300] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3217700 ] 00:05:57.168 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.168 [2024-07-16 01:11:23.003624] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.168 [2024-07-16 01:11:23.079424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.168 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:58.545 ====================================== 00:05:58.545 busy:2105609104 (cyc) 00:05:58.545 total_run_count: 426000 00:05:58.545 tsc_hz: 2100000000 (cyc) 00:05:58.545 ====================================== 00:05:58.545 poller_cost: 4942 (cyc), 2353 (nsec) 00:05:58.545 00:05:58.545 real 0m1.234s 00:05:58.545 user 0m1.157s 00:05:58.545 sys 0m0.073s 00:05:58.545 01:11:24 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.545 01:11:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:58.545 ************************************ 00:05:58.545 END TEST thread_poller_perf 00:05:58.545 ************************************ 00:05:58.545 01:11:24 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:58.545 01:11:24 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:58.545 01:11:24 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:58.545 01:11:24 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.545 01:11:24 thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.545 ************************************ 00:05:58.545 START TEST thread_poller_perf 00:05:58.545 ************************************ 00:05:58.545 01:11:24 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:58.545 [2024-07-16 01:11:24.233967] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:05:58.545 [2024-07-16 01:11:24.234039] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3217909 ] 00:05:58.545 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.545 [2024-07-16 01:11:24.292377] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.545 [2024-07-16 01:11:24.367535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.545 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:59.482 ====================================== 00:05:59.482 busy:2101497440 (cyc) 00:05:59.482 total_run_count: 5550000 00:05:59.482 tsc_hz: 2100000000 (cyc) 00:05:59.482 ====================================== 00:05:59.482 poller_cost: 378 (cyc), 180 (nsec) 00:05:59.482 00:05:59.482 real 0m1.222s 00:05:59.482 user 0m1.149s 00:05:59.482 sys 0m0.069s 00:05:59.482 01:11:25 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.482 01:11:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:59.482 ************************************ 00:05:59.482 END TEST thread_poller_perf 00:05:59.482 ************************************ 00:05:59.482 01:11:25 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:59.482 01:11:25 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:59.482 00:05:59.482 real 0m2.666s 00:05:59.482 user 0m2.389s 00:05:59.482 sys 0m0.287s 00:05:59.482 01:11:25 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.482 01:11:25 thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.482 ************************************ 00:05:59.482 END TEST thread 00:05:59.482 ************************************ 00:05:59.741 01:11:25 -- common/autotest_common.sh@1142 -- # return 0 00:05:59.741 01:11:25 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:59.741 01:11:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.741 01:11:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.741 01:11:25 -- common/autotest_common.sh@10 -- # set +x 00:05:59.741 ************************************ 00:05:59.741 START TEST accel 00:05:59.741 ************************************ 00:05:59.741 01:11:25 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:59.741 * Looking for test storage... 00:05:59.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:59.741 01:11:25 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:59.741 01:11:25 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:59.741 01:11:25 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:59.741 01:11:25 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3218209 00:05:59.741 01:11:25 accel -- accel/accel.sh@63 -- # waitforlisten 3218209 00:05:59.741 01:11:25 accel -- common/autotest_common.sh@829 -- # '[' -z 3218209 ']' 00:05:59.741 01:11:25 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:59.741 01:11:25 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:59.741 01:11:25 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:59.741 01:11:25 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.741 01:11:25 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:59.741 01:11:25 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.741 01:11:25 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.741 01:11:25 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.741 01:11:25 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:59.741 01:11:25 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:59.741 01:11:25 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:59.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.741 01:11:25 accel -- accel/accel.sh@41 -- # jq -r . 00:05:59.741 01:11:25 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.741 01:11:25 accel -- common/autotest_common.sh@10 -- # set +x 00:05:59.741 [2024-07-16 01:11:25.656305] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:05:59.741 [2024-07-16 01:11:25.656367] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3218209 ] 00:05:59.741 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.741 [2024-07-16 01:11:25.709856] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.999 [2024-07-16 01:11:25.790721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.566 01:11:26 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.566 01:11:26 accel -- common/autotest_common.sh@862 -- # return 0 00:06:00.566 01:11:26 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:00.566 01:11:26 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:00.566 01:11:26 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:00.566 01:11:26 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:00.566 01:11:26 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:00.566 01:11:26 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:00.566 01:11:26 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:00.566 01:11:26 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.566 01:11:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:00.566 01:11:26 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.566 01:11:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:00.566 01:11:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:00.566 01:11:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:00.566 01:11:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:00.566 01:11:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:00.566 01:11:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:00.566 01:11:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:00.566 01:11:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:00.566 01:11:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:00.566 01:11:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:00.566 01:11:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:00.566 01:11:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:00.566 01:11:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:00.566 01:11:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:00.566 01:11:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:00.566 01:11:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:00.566 01:11:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:00.566 01:11:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:00.566 01:11:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:00.566 01:11:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:00.566 01:11:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:00.566 01:11:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:00.566 01:11:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:00.566 01:11:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:00.566 01:11:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:00.566 01:11:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:00.566 01:11:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:00.566 01:11:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:00.566 01:11:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:00.566 01:11:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:00.566 01:11:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:00.566 01:11:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:00.566 01:11:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:00.566 01:11:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:00.566 01:11:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:00.566 01:11:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:00.566 01:11:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:00.566 01:11:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:00.566 01:11:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:00.566 01:11:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:00.566 01:11:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:00.566 01:11:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:00.566 01:11:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:00.566 
01:11:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:00.566 01:11:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:00.566 01:11:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:00.566 01:11:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:00.566 01:11:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:00.566 01:11:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:00.566 01:11:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:00.566 01:11:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:00.566 01:11:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:00.566 01:11:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:00.566 01:11:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:00.566 01:11:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:00.566 01:11:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:00.566 01:11:26 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:00.566 01:11:26 accel -- accel/accel.sh@72 -- # IFS== 00:06:00.566 01:11:26 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:00.566 01:11:26 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:00.566 01:11:26 accel -- accel/accel.sh@75 -- # killprocess 3218209 00:06:00.566 01:11:26 accel -- common/autotest_common.sh@948 -- # '[' -z 3218209 ']' 00:06:00.566 01:11:26 accel -- common/autotest_common.sh@952 -- # kill -0 3218209 00:06:00.566 01:11:26 accel -- common/autotest_common.sh@953 -- # uname 00:06:00.566 01:11:26 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:00.566 01:11:26 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3218209 00:06:00.566 01:11:26 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:00.566 01:11:26 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:00.566 01:11:26 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3218209' 00:06:00.566 killing process with pid 3218209 00:06:00.566 01:11:26 accel -- common/autotest_common.sh@967 -- # kill 3218209 00:06:00.566 01:11:26 accel -- common/autotest_common.sh@972 -- # wait 3218209 00:06:01.133 01:11:26 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:01.133 01:11:26 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:01.133 01:11:26 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:01.133 01:11:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.133 01:11:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:01.133 01:11:26 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:01.133 01:11:26 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:01.133 01:11:26 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:01.133 01:11:26 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:01.133 01:11:26 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:01.133 01:11:26 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.133 01:11:26 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.133 01:11:26 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:01.133 01:11:26 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:01.133 01:11:26 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:01.133 01:11:26 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.133 01:11:26 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:01.133 01:11:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:01.133 01:11:26 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:01.133 01:11:26 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:01.133 01:11:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.133 01:11:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:01.133 ************************************ 00:06:01.133 START TEST accel_missing_filename 00:06:01.133 ************************************ 00:06:01.133 01:11:26 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:01.133 01:11:26 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:01.133 01:11:26 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:01.133 01:11:26 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:01.133 01:11:26 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:01.133 01:11:26 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:01.133 01:11:26 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:01.133 01:11:26 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:01.133 01:11:26 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:01.133 01:11:26 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:01.133 01:11:26 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:01.133 01:11:26 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:01.133 01:11:26 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.133 01:11:26 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.133 01:11:26 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:01.133 01:11:26 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:01.133 01:11:26 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:01.133 [2024-07-16 01:11:26.996112] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:06:01.133 [2024-07-16 01:11:26.996177] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3218477 ] 00:06:01.133 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.133 [2024-07-16 01:11:27.057699] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.391 [2024-07-16 01:11:27.135199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.391 [2024-07-16 01:11:27.175953] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:01.391 [2024-07-16 01:11:27.235532] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:01.391 A filename is required. 
00:06:01.391 01:11:27 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:01.391 01:11:27 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:01.391 01:11:27 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:01.391 01:11:27 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:01.391 01:11:27 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:01.391 01:11:27 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:01.391 00:06:01.391 real 0m0.341s 00:06:01.391 user 0m0.257s 00:06:01.391 sys 0m0.122s 00:06:01.391 01:11:27 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.391 01:11:27 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:01.391 ************************************ 00:06:01.391 END TEST accel_missing_filename 00:06:01.391 ************************************ 00:06:01.391 01:11:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:01.391 01:11:27 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:01.391 01:11:27 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:01.391 01:11:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.391 01:11:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:01.391 ************************************ 00:06:01.391 START TEST accel_compress_verify 00:06:01.391 ************************************ 00:06:01.391 01:11:27 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:01.391 01:11:27 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:01.391 01:11:27 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:01.391 01:11:27 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:01.391 01:11:27 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:01.391 01:11:27 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:01.391 01:11:27 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:01.391 01:11:27 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:01.391 01:11:27 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:01.391 01:11:27 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:01.391 01:11:27 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:01.391 01:11:27 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:01.392 01:11:27 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.392 01:11:27 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.392 01:11:27 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:01.392 01:11:27 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:01.392 01:11:27 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:01.650 [2024-07-16 01:11:27.403651] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:06:01.650 [2024-07-16 01:11:27.403725] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3218608 ] 00:06:01.650 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.650 [2024-07-16 01:11:27.461020] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.650 [2024-07-16 01:11:27.531386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.650 [2024-07-16 01:11:27.571872] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:01.650 [2024-07-16 01:11:27.630962] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:01.908 00:06:01.908 Compression does not support the verify option, aborting. 00:06:01.908 01:11:27 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:01.908 01:11:27 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:01.908 01:11:27 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:01.908 01:11:27 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:01.908 01:11:27 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:01.908 01:11:27 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:01.908 00:06:01.908 real 0m0.329s 00:06:01.908 user 0m0.235s 00:06:01.908 sys 0m0.116s 00:06:01.908 01:11:27 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.908 01:11:27 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:01.908 ************************************ 00:06:01.908 END TEST accel_compress_verify 00:06:01.908 ************************************ 00:06:01.908 01:11:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:01.909 01:11:27 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:01.909 01:11:27 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:01.909 01:11:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.909 01:11:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:01.909 ************************************ 00:06:01.909 START TEST accel_wrong_workload 00:06:01.909 ************************************ 00:06:01.909 01:11:27 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:01.909 01:11:27 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:01.909 01:11:27 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:01.909 01:11:27 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:01.909 01:11:27 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:01.909 01:11:27 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:01.909 01:11:27 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:01.909 01:11:27 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:01.909 01:11:27 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:01.909 01:11:27 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:01.909 01:11:27 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:01.909 01:11:27 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:01.909 01:11:27 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.909 01:11:27 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.909 01:11:27 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:01.909 01:11:27 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:01.909 01:11:27 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:01.909 Unsupported workload type: foobar 00:06:01.909 [2024-07-16 01:11:27.783407] app.c:1460:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:01.909 accel_perf options: 00:06:01.909 [-h help message] 00:06:01.909 [-q queue depth per core] 00:06:01.909 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:01.909 [-T number of threads per core 00:06:01.909 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:01.909 [-t time in seconds] 00:06:01.909 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:01.909 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:01.909 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:01.909 [-l for compress/decompress workloads, name of uncompressed input file 00:06:01.909 [-S for crc32c workload, use this seed value (default 0) 00:06:01.909 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:01.909 [-f for fill workload, use this BYTE value (default 255) 00:06:01.909 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:01.909 [-y verify result if this switch is on] 00:06:01.909 [-a tasks to allocate per core (default: same value as -q)] 00:06:01.909 Can be used to spread operations across a wider range of memory. 
00:06:01.909 01:11:27 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:01.909 01:11:27 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:01.909 01:11:27 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:01.909 01:11:27 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:01.909 00:06:01.909 real 0m0.031s 00:06:01.909 user 0m0.021s 00:06:01.909 sys 0m0.010s 00:06:01.909 01:11:27 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.909 01:11:27 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:01.909 ************************************ 00:06:01.909 END TEST accel_wrong_workload 00:06:01.909 ************************************ 00:06:01.909 Error: writing output failed: Broken pipe 00:06:01.909 01:11:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:01.909 01:11:27 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:01.909 01:11:27 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:01.909 01:11:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.909 01:11:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:01.909 ************************************ 00:06:01.909 START TEST accel_negative_buffers 00:06:01.909 ************************************ 00:06:01.909 01:11:27 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:01.909 01:11:27 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:01.909 01:11:27 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:01.909 01:11:27 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:01.909 01:11:27 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:01.909 01:11:27 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:01.909 01:11:27 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:01.909 01:11:27 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:01.909 01:11:27 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:01.909 01:11:27 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:01.909 01:11:27 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:01.909 01:11:27 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:01.909 01:11:27 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.909 01:11:27 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.909 01:11:27 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:01.909 01:11:27 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:01.909 01:11:27 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:01.909 -x option must be non-negative. 
00:06:01.909 [2024-07-16 01:11:27.882698] app.c:1460:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:01.909 accel_perf options: 00:06:01.909 [-h help message] 00:06:01.909 [-q queue depth per core] 00:06:01.909 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:01.909 [-T number of threads per core 00:06:01.909 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:01.909 [-t time in seconds] 00:06:01.909 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:01.909 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:01.909 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:01.909 [-l for compress/decompress workloads, name of uncompressed input file 00:06:01.909 [-S for crc32c workload, use this seed value (default 0) 00:06:01.909 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:01.909 [-f for fill workload, use this BYTE value (default 255) 00:06:01.909 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:01.909 [-y verify result if this switch is on] 00:06:01.909 [-a tasks to allocate per core (default: same value as -q)] 00:06:01.909 Can be used to spread operations across a wider range of memory. 00:06:01.909 01:11:27 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:01.909 01:11:27 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:01.909 01:11:27 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:01.909 01:11:27 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:01.909 00:06:01.909 real 0m0.032s 00:06:01.909 user 0m0.019s 00:06:01.909 sys 0m0.013s 00:06:01.909 01:11:27 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.909 01:11:27 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:01.909 ************************************ 00:06:01.909 END TEST accel_negative_buffers 00:06:01.909 ************************************ 00:06:02.169 Error: writing output failed: Broken pipe 00:06:02.169 01:11:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:02.169 01:11:27 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:02.169 01:11:27 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:02.169 01:11:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.169 01:11:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:02.169 ************************************ 00:06:02.169 START TEST accel_crc32c 00:06:02.169 ************************************ 00:06:02.169 01:11:27 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:02.169 01:11:27 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:02.169 01:11:27 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:02.169 01:11:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.169 01:11:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.169 01:11:27 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:02.169 01:11:27 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y
00:06:02.169 01:11:27 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config
00:06:02.169 01:11:27 accel.accel_crc32c -- accel/accel.sh@31-41 -- # [build_accel_config xtrace elided: accel_json_cfg=(), no module overrides, local IFS=',', jq -r .]
00:06:02.169 [2024-07-16 01:11:27.975208] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization...
00:06:02.169 [2024-07-16 01:11:27.975272] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3218673 ]
00:06:02.169 EAL: No free 2048 kB hugepages reported on node 1
00:06:02.169 [2024-07-16 01:11:28.032091] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:02.169 [2024-07-16 01:11:28.103300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:02.169 01:11:28 accel.accel_crc32c -- accel/accel.sh@19-23 -- # [repetitive xtrace of the config read loop elided; values read: 0x1, crc32c, 32, '4096 bytes', software, 32, 32, 1, '1 seconds', Yes]
00:06:03.365 01:11:29 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:03.365 01:11:29 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]]
00:06:03.365 01:11:29 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:03.365 real 0m1.332s
00:06:03.365 user 0m1.227s
00:06:03.365 sys 0m0.118s
00:06:03.365 01:11:29 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:03.365 01:11:29 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x
00:06:03.365 ************************************
00:06:03.365 END TEST accel_crc32c
00:06:03.365 ************************************
00:06:03.365 01:11:29 accel -- common/autotest_common.sh@1142 -- # return 0
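For reference, the crc32c case that just finished can be rerun by hand; a minimal bash sketch, assuming the flag meanings inferred from the traced config values above (-t 1 -> '1 seconds' run time, -S 32 -> seed 32, -y -> verify=Yes) and omitting the harness's -c /dev/fd/62 JSON config so the default software module is used:

  # Sketch: rerun the software crc32c case outside the autotest harness.
  # Flag meanings are inferred from this trace, not from accel_perf's help.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path taken from this log
  "$SPDK/build/examples/accel_perf" -t 1 -w crc32c -S 32 -y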
00:06:03.365 01:11:29 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2
00:06:03.365 01:11:29 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:06:03.365 01:11:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:03.365 01:11:29 accel -- common/autotest_common.sh@10 -- # set +x
00:06:03.365 ************************************
00:06:03.365 START TEST accel_crc32c_C2
00:06:03.365 ************************************
00:06:03.365 01:11:29 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2
00:06:03.365 01:11:29 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2
00:06:03.365 01:11:29 accel.accel_crc32c_C2 -- accel/accel.sh@12-41 -- # [build_accel_config xtrace elided]
00:06:03.625 [2024-07-16 01:11:29.351140] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization...
00:06:03.625 [2024-07-16 01:11:29.351177] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3218925 ]
00:06:03.625 EAL: No free 2048 kB hugepages reported on node 1
00:06:03.625 [2024-07-16 01:11:29.405887] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:03.625 [2024-07-16 01:11:29.476935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:03.625 01:11:29 accel.accel_crc32c_C2 -- accel/accel.sh@19-23 -- # [repetitive xtrace of the config read loop elided; values read: 0x1, crc32c, 0, '4096 bytes', software, 32, 32, 1, '1 seconds', Yes]
00:06:05.003 01:11:30 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:05.003 01:11:30 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]]
00:06:05.003 01:11:30 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:05.003 real 0m1.323s
00:06:05.003 user 0m1.231s
00:06:05.003 sys 0m0.106s
00:06:05.003 01:11:30 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:05.003 01:11:30 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
00:06:05.003 ************************************
00:06:05.003 END TEST accel_crc32c_C2
00:06:05.003 ************************************
00:06:05.003 01:11:30 accel -- common/autotest_common.sh@1142 -- # return 0
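Timing the plain and -C 2 crc32c runs side by side reproduces the real/user/sys lines seen above; a sketch, assuming -C 2 is what makes accel_test run this two-buffer variant (the traced values only confirm seed 0 and a single '4096 bytes' buffer size):

  # Sketch: time both crc32c invocations the way run_test presents them.
  # bash's `time` keyword emits the same real/user/sys lines as this log.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  time "$SPDK/build/examples/accel_perf" -t 1 -w crc32c -S 32 -y
  time "$SPDK/build/examples/accel_perf" -t 1 -w crc32c -y -C 2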
00:06:05.003 01:11:30 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y
00:06:05.003 01:11:30 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:06:05.003 01:11:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:05.003 01:11:30 accel -- common/autotest_common.sh@10 -- # set +x
00:06:05.003 ************************************
00:06:05.003 START TEST accel_copy
00:06:05.003 ************************************
00:06:05.003 01:11:30 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y
00:06:05.003 01:11:30 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:06:05.003 01:11:30 accel.accel_copy -- accel/accel.sh@12-41 -- # [build_accel_config xtrace elided]
00:06:05.004 [2024-07-16 01:11:30.745984] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization...
00:06:05.004 [2024-07-16 01:11:30.746029] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3219173 ]
00:06:05.004 EAL: No free 2048 kB hugepages reported on node 1
00:06:05.004 [2024-07-16 01:11:30.801969] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:05.004 [2024-07-16 01:11:30.873204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:05.004 01:11:30 accel.accel_copy -- accel/accel.sh@19-23 -- # [repetitive xtrace of the config read loop elided; values read: 0x1, copy, '4096 bytes', software, 32, 32, 1, '1 seconds', Yes]
00:06:06.380 01:11:32 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:06.380 01:11:32 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]]
00:06:06.380 01:11:32 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:06.380 real 0m1.332s
00:06:06.380 user 0m1.233s
00:06:06.380 sys 0m0.111s
00:06:06.380 01:11:32 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:06.380 01:11:32 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x
00:06:06.380 ************************************
00:06:06.380 END TEST accel_copy
00:06:06.380 ************************************
00:06:06.380 01:11:32 accel -- common/autotest_common.sh@1142 -- # return 0
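The START/END banners and timing around every test in this log come from autotest_common.sh's run_test helper; a stripped-down sketch of the same pattern (not the actual helper, whose argument counting and xtrace bookkeeping are more involved):

  # Sketch: banner-and-time a test command the way run_test presents it.
  run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
  }
  run_test_sketch accel_copy \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w copy -y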
00:06:06.380 01:11:32 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:06:06.380 01:11:32 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:06:06.380 01:11:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:06.380 01:11:32 accel -- common/autotest_common.sh@10 -- # set +x
00:06:06.380 ************************************
00:06:06.380 START TEST accel_fill
00:06:06.380 ************************************
00:06:06.380 01:11:32 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:06:06.380 01:11:32 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:06:06.380 01:11:32 accel.accel_fill -- accel/accel.sh@12-41 -- # [build_accel_config xtrace elided]
00:06:06.380 [2024-07-16 01:11:32.138686] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization...
00:06:06.380 [2024-07-16 01:11:32.138740] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3219426 ]
00:06:06.380 EAL: No free 2048 kB hugepages reported on node 1
00:06:06.380 [2024-07-16 01:11:32.195371] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:06.380 [2024-07-16 01:11:32.266076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:06.380 01:11:32 accel.accel_fill -- accel/accel.sh@19-23 -- # [repetitive xtrace of the config read loop elided; values read: 0x1, fill, 0x80, '4096 bytes', software, 64, 64, 1, '1 seconds', Yes]
00:06:07.754 01:11:33 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:07.754 01:11:33 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]]
00:06:07.754 01:11:33 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:07.754 real 0m1.329s
00:06:07.754 user 0m1.235s
00:06:07.754 sys 0m0.107s
00:06:07.754 01:11:33 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:07.754 01:11:33 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x
00:06:07.754 ************************************
00:06:07.754 END TEST accel_fill
00:06:07.754 ************************************
00:06:07.754 01:11:33 accel -- common/autotest_common.sh@1142 -- # return 0
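The -f 128 passed to the fill test surfaces in the trace as val=0x80: 128 decimal is 0x80 hex, consistent with it being the byte pattern written into each 4096-byte buffer (an inference from the traced values, not a documented flag mapping):

  # 128 decimal -> 0x80, the fill value seen in the xtrace above
  printf 'fill byte: 0x%02x\n' 128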
00:06:07.754 01:11:33 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
00:06:07.754 01:11:33 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:06:07.754 01:11:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:07.754 01:11:33 accel -- common/autotest_common.sh@10 -- # set +x
00:06:07.754 ************************************
00:06:07.754 START TEST accel_copy_crc32c
00:06:07.754 ************************************
00:06:07.754 01:11:33 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y
00:06:07.754 01:11:33 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:06:07.754 01:11:33 accel.accel_copy_crc32c -- accel/accel.sh@12-41 -- # [build_accel_config xtrace elided]
00:06:07.755 [2024-07-16 01:11:33.508204] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization...
00:06:07.755 [2024-07-16 01:11:33.508239] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3219671 ]
00:06:07.755 EAL: No free 2048 kB hugepages reported on node 1
00:06:07.755 [2024-07-16 01:11:33.561715] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:07.755 [2024-07-16 01:11:33.632030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:07.755 01:11:33 accel.accel_copy_crc32c -- accel/accel.sh@19-23 -- # [repetitive xtrace of the config read loop elided; values read: 0x1, copy_crc32c, 0, '4096 bytes', '4096 bytes', software, 32, 32, 1, '1 seconds', Yes]
00:06:09.128 01:11:34 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:09.128 01:11:34 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:06:09.128 01:11:34 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:09.128 real 0m1.313s
00:06:09.128 user 0m1.224s
00:06:09.128 sys 0m0.103s
00:06:09.128 01:11:34 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:09.128 01:11:34 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x
00:06:09.128 ************************************
00:06:09.128 END TEST accel_copy_crc32c
00:06:09.128 ************************************
00:06:09.128 01:11:34 accel -- common/autotest_common.sh@1142 -- # return 0
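Since every run emits the same real/user/sys block, the per-test wall-clock times can be pulled straight out of a saved copy of this console output; a sketch, assuming the log has been saved as accel.log (a hypothetical filename):

  # Sketch: list the wall-clock time of each accel_perf test in this log
  grep ' real ' accel.log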
00:06:09.128 01:11:34 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:06:09.128 01:11:34 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:06:09.128 01:11:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:09.128 01:11:34 accel -- common/autotest_common.sh@10 -- # set +x
00:06:09.128 ************************************
00:06:09.128 START TEST accel_copy_crc32c_C2
00:06:09.128 ************************************
00:06:09.128 01:11:34 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2
00:06:09.128 01:11:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:06:09.128 01:11:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12-41 -- # [build_accel_config xtrace elided]
00:06:09.129 [2024-07-16 01:11:34.878795] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization...
00:06:09.129 [2024-07-16 01:11:34.878844] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3219920 ]
00:06:09.129 EAL: No free 2048 kB hugepages reported on node 1
00:06:09.129 [2024-07-16 01:11:34.933961] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:09.129 [2024-07-16 01:11:35.004786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:09.129 01:11:35 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19-23 -- # [repetitive xtrace of the config read loop elided; values read: 0x1, copy_crc32c, 0, '4096 bytes', '8192 bytes', software, 32, 32, 1, '1 seconds', Yes]
00:06:10.504 01:11:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:10.504 01:11:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:06:10.504 01:11:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:10.504 real 0m1.322s
00:06:10.504 user 0m1.238s
00:06:10.504 sys 0m0.100s
00:06:10.504 01:11:36 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:10.504 01:11:36 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
00:06:10.504 ************************************
00:06:10.504 END TEST accel_copy_crc32c_C2
00:06:10.504 ************************************
00:06:10.504 01:11:36 accel -- common/autotest_common.sh@1142 -- # return 0
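All of the software-path workloads exercised in this stretch of the run can be swept with a single loop; a sketch reusing the binary path and flags seen above (per-test extras such as fill's -f 128 -q 64 -a 64 and the -C 2 variants are omitted for brevity):

  # Sketch: sweep the software accel_perf workloads exercised in this section
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  for w in crc32c copy fill copy_crc32c dualcast; do
    echo "== $w =="
    "$SPDK/build/examples/accel_perf" -t 1 -w "$w" -y
  done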
00:06:10.504 01:11:36 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:06:10.504 01:11:36 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:06:10.504 01:11:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:10.504 01:11:36 accel -- common/autotest_common.sh@10 -- # set +x
00:06:10.504 ************************************
00:06:10.504 START TEST accel_dualcast
00:06:10.504 ************************************
00:06:10.504 [xtrace, condensed: accel_test launches /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y; build_accel_config assembles an empty accel_json_cfg and feeds it through jq -r .]
00:06:10.504 [2024-07-16 01:11:36.255023] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization...
00:06:10.504 [2024-07-16 01:11:36.255073] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3220172 ]
00:06:10.504 EAL: No free 2048 kB hugepages reported on node 1
00:06:10.504 [2024-07-16 01:11:36.309935] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:10.504 [2024-07-16 01:11:36.380672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:10.504 [xtrace, condensed: the option loop reads val=0x1, val=dualcast (accel_opc=dualcast), val='4096 bytes', val=software (accel_module=software), val=32, val=32, val=1, val='1 seconds', val=Yes]
00:06:11.885 [xtrace, condensed: trailing val= reads at 01:11:37 as the run completes]
00:06:11.885 01:11:37 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:11.885 01:11:37 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:06:11.885 01:11:37 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:11.885
00:06:11.885 real 0m1.320s
00:06:11.885 user 0m1.229s
00:06:11.885 sys 0m0.105s
00:06:11.885 01:11:37 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:11.885 01:11:37 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x
00:06:11.886 ************************************
00:06:11.886 END TEST accel_dualcast
00:06:11.886 ************************************
00:06:11.886 01:11:37 accel -- common/autotest_common.sh@1142 -- # return 0
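The dualcast pass that just finished duplicates one 4096-byte source into two destination buffers, per its option trace (accel_opc=dualcast, '4096 bytes', software module, queue depth 32, one core, one second, verify Yes). A minimal direct rerun, under the same assumptions as the earlier sketch:

  # dualcast: one source buffer written to two destinations, verified with -y
  ./build/examples/accel_perf -t 1 -w dualcast -y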
00:06:11.886 01:11:37 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:06:11.886 01:11:37 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:06:11.886 01:11:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:11.886 01:11:37 accel -- common/autotest_common.sh@10 -- # set +x
00:06:11.886 ************************************
00:06:11.886 START TEST accel_compare
00:06:11.886 ************************************
00:06:11.886 [xtrace, condensed: accel_test launches build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y with the same empty accel_json_cfg]
00:06:11.886 [2024-07-16 01:11:37.630306] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization...
00:06:11.886 [2024-07-16 01:11:37.630346] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3220419 ]
00:06:11.886 EAL: No free 2048 kB hugepages reported on node 1
00:06:11.886 [2024-07-16 01:11:37.684494] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:11.886 [2024-07-16 01:11:37.755347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:11.886 [xtrace, condensed: the option loop reads val=0x1, val=compare (accel_opc=compare), val='4096 bytes', val=software (accel_module=software), val=32, val=32, val=1, val='1 seconds', val=Yes]
00:06:13.264 [xtrace, condensed: trailing val= reads at 01:11:38 as the run completes]
00:06:13.264 01:11:38 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:13.264 01:11:38 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]]
00:06:13.264 01:11:38 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:13.264
00:06:13.264 real 0m1.320s
00:06:13.264 user 0m1.230s
00:06:13.264 sys 0m0.103s
00:06:13.264 01:11:38 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:13.264 01:11:38 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x
00:06:13.264 ************************************
00:06:13.264 END TEST accel_compare
00:06:13.264 ************************************
00:06:13.264 01:11:38 accel -- common/autotest_common.sh@1142 -- # return 0
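accel_compare has the same shape with -w compare, a buffer-comparison workload over 4096-byte buffers; -y keeps software-side verification on. Sketch, same assumptions as before:

  # compare two equal-sized buffers repeatedly for one second
  ./build/examples/accel_perf -t 1 -w compare -y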
00:06:13.264 01:11:38 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:06:13.264 01:11:38 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:06:13.264 01:11:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:13.264 01:11:38 accel -- common/autotest_common.sh@10 -- # set +x
00:06:13.264 ************************************
00:06:13.264 START TEST accel_xor
00:06:13.264 ************************************
00:06:13.264 [xtrace, condensed: accel_test launches build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y with the same empty accel_json_cfg]
00:06:13.264 [2024-07-16 01:11:38.997213] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization...
00:06:13.264 [2024-07-16 01:11:38.997248] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3220668 ]
00:06:13.264 EAL: No free 2048 kB hugepages reported on node 1
00:06:13.264 [2024-07-16 01:11:39.050364] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:13.264 [2024-07-16 01:11:39.120537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:13.264 [xtrace, condensed: the option loop reads val=0x1, val=xor (accel_opc=xor), val=2, val='4096 bytes', val=software (accel_module=software), val=32, val=32, val=1, val='1 seconds', val=Yes]
00:06:14.642 [xtrace, condensed: trailing val= reads at 01:11:40 as the run completes]
00:06:14.642 01:11:40 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:14.642 01:11:40 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:06:14.642 01:11:40 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:14.642
00:06:14.642 real 0m1.315s
00:06:14.642 user 0m1.227s
00:06:14.642 sys 0m0.102s
00:06:14.642 01:11:40 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:14.642 01:11:40 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:06:14.642 ************************************
00:06:14.642 END TEST accel_xor
00:06:14.642 ************************************
00:06:14.642 01:11:40 accel -- common/autotest_common.sh@1142 -- # return 0
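For this xor pass the option loop read val=2 ahead of the buffer size, i.e. two source buffers XORed into a destination. Direct form, same assumptions:

  # xor with the default source count (2, per the trace above)
  ./build/examples/accel_perf -t 1 -w xor -y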
00:06:14.642 01:11:40 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:06:14.642 01:11:40 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:06:14.642 01:11:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:14.642 01:11:40 accel -- common/autotest_common.sh@10 -- # set +x
00:06:14.643 ************************************
00:06:14.643 START TEST accel_xor
00:06:14.643 ************************************
00:06:14.643 [xtrace, condensed: accel_test launches build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 with the same empty accel_json_cfg]
00:06:14.643 [2024-07-16 01:11:40.388627] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization...
00:06:14.643 [2024-07-16 01:11:40.388673] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3220924 ]
00:06:14.643 EAL: No free 2048 kB hugepages reported on node 1
00:06:14.643 [2024-07-16 01:11:40.444166] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:14.643 [2024-07-16 01:11:40.515122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:14.643 [xtrace, condensed: the option loop reads val=0x1, val=xor (accel_opc=xor), val=3, val='4096 bytes', val=software (accel_module=software), val=32, val=32, val=1, val='1 seconds', val=Yes]
00:06:16.019 [xtrace, condensed: trailing val= reads at 01:11:41 as the run completes]
00:06:16.020 01:11:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:16.020 01:11:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:06:16.020 01:11:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:16.020
00:06:16.020 real 0m1.331s
00:06:16.020 user 0m1.230s
00:06:16.020 sys 0m0.114s
00:06:16.020 01:11:41 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:16.020 01:11:41 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:06:16.020 ************************************
00:06:16.020 END TEST accel_xor
00:06:16.020 ************************************
00:06:16.020 01:11:41 accel -- common/autotest_common.sh@1142 -- # return 0
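This second accel_xor case is the -x 3 variant from its run_test line: the trace read val=3, so three source buffers feed each xor operation. Sketch:

  # xor across three source buffers instead of the default two
  ./build/examples/accel_perf -t 1 -w xor -y -x 3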
00:06:16.020 01:11:41 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:06:16.020 01:11:41 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:06:16.020 01:11:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:16.020 01:11:41 accel -- common/autotest_common.sh@10 -- # set +x
00:06:16.020 ************************************
00:06:16.020 START TEST accel_dif_verify
00:06:16.020 ************************************
00:06:16.020 [xtrace, condensed: accel_test launches build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify with the same empty accel_json_cfg]
00:06:16.020 [2024-07-16 01:11:41.760180] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization...
00:06:16.020 [2024-07-16 01:11:41.760214] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3221170 ]
00:06:16.020 EAL: No free 2048 kB hugepages reported on node 1
00:06:16.020 [2024-07-16 01:11:41.812821] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:16.020 [2024-07-16 01:11:41.885405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:16.020 [xtrace, condensed: the option loop reads val=0x1, val=dif_verify (accel_opc=dif_verify), val='4096 bytes', val='4096 bytes', val='512 bytes', val='8 bytes', val=software (accel_module=software), val=32, val=32, val=1, val='1 seconds', val=No]
00:06:17.396 [xtrace, condensed: trailing val= reads at 01:11:43 as the run completes]
00:06:17.396 01:11:43 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:17.396 01:11:43 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]]
00:06:17.396 01:11:43 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:17.396
00:06:17.396 real 0m1.317s
00:06:17.396 user 0m1.232s
00:06:17.396 sys 0m0.101s
00:06:17.396 01:11:43 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:17.396 01:11:43 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x
00:06:17.396 ************************************
00:06:17.396 END TEST accel_dif_verify
00:06:17.396 ************************************
00:06:17.396 01:11:43 accel -- common/autotest_common.sh@1142 -- # return 0
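The dif_verify option loop carried two '4096 bytes' reads plus '512 bytes' and '8 bytes', and runs with verify No rather than Yes. A plausible reading — an assumption, since the trace does not label these values — is 4096-byte test buffers split into 512-byte blocks, each carrying 8 bytes of DIF protection metadata. Direct form:

  # verify DIF protection information using the software module
  ./build/examples/accel_perf -t 1 -w dif_verify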
00:06:17.396 01:11:43 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:06:17.396 01:11:43 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:06:17.396 01:11:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:17.396 01:11:43 accel -- common/autotest_common.sh@10 -- # set +x
00:06:17.396 ************************************
00:06:17.396 START TEST accel_dif_generate
00:06:17.396 ************************************
00:06:17.396 [xtrace, condensed: accel_test launches build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate with the same empty accel_json_cfg]
00:06:17.396 [2024-07-16 01:11:43.132556] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization...
00:06:17.396 [2024-07-16 01:11:43.132603] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3221417 ]
00:06:17.396 EAL: No free 2048 kB hugepages reported on node 1
00:06:17.396 [2024-07-16 01:11:43.186829] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:17.396 [2024-07-16 01:11:43.258473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:17.396 [xtrace, condensed: the option loop reads val=0x1, val=dif_generate (accel_opc=dif_generate), val='4096 bytes', val='4096 bytes', val='512 bytes', val='8 bytes', val=software (accel_module=software), val=32, val=32, val=1, val='1 seconds', val=No]
00:06:18.770 [xtrace, condensed: trailing val= reads at 01:11:44 as the run completes]
00:06:18.770 01:11:44 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:18.770 01:11:44 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]]
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.770 00:06:18.770 real 0m1.321s 00:06:18.770 user 0m1.234s 00:06:18.770 sys 0m0.102s 00:06:18.770 01:11:44 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.770 01:11:44 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:18.770 ************************************ 00:06:18.770 END TEST accel_dif_generate 00:06:18.770 ************************************ 00:06:18.770 01:11:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:18.770 01:11:44 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:18.770 01:11:44 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:18.770 01:11:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.770 01:11:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:18.770 ************************************ 00:06:18.770 START TEST accel_dif_generate_copy 00:06:18.770 ************************************ 00:06:18.770 01:11:44 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:18.770 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:18.770 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:18.770 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:18.770 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.770 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.770 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:18.770 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:18.770 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.770 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.770 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.770 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.770 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.770 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:18.770 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:18.770 [2024-07-16 01:11:44.520989] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
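The dif_generate pass above completed on the software engine in roughly 1.3 s of wall time. A minimal sketch of reproducing such a run by hand, assuming the same workspace checkout; the accel JSON config the harness pipes in on /dev/fd/62 is empty for these runs (accel_json_cfg=() in the trace), so the sketch omits -c and relies on the software fallback:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # -t 1: run the workload for one second; -w: workload name (flags as in the trace above)
  ./build/examples/accel_perf -t 1 -w dif_generate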
00:06:18.770 [2024-07-16 01:11:44.521034] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3221668 ] 00:06:18.770 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.770 [2024-07-16 01:11:44.576915] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.770 [2024-07-16 01:11:44.649632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.770 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:18.770 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.770 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.770 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.770 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:18.770 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.770 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.770 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.770 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:18.770 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.770 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.770 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.770 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:18.770 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.770 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.770 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.770 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:18.770 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:18.771 01:11:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.157 01:11:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:20.157 01:11:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.157 01:11:45 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:20.157 01:11:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.157 01:11:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:20.157 01:11:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.157 01:11:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.157 01:11:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.157 01:11:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:20.157 01:11:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.157 01:11:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.157 01:11:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.157 01:11:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:20.157 01:11:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.157 01:11:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.157 01:11:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.157 01:11:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:20.157 01:11:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.157 01:11:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.157 01:11:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.157 01:11:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:20.157 01:11:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:20.157 01:11:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:20.157 01:11:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:20.157 01:11:45 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:20.157 01:11:45 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:20.157 01:11:45 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.157 00:06:20.157 real 0m1.332s 00:06:20.157 user 0m1.240s 00:06:20.157 sys 0m0.104s 00:06:20.157 01:11:45 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.157 01:11:45 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:20.157 ************************************ 00:06:20.157 END TEST accel_dif_generate_copy 00:06:20.157 ************************************ 00:06:20.157 01:11:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:20.157 01:11:45 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:20.157 01:11:45 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:20.157 01:11:45 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:20.157 01:11:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.157 01:11:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:20.157 ************************************ 00:06:20.157 START TEST accel_comp 00:06:20.157 ************************************ 00:06:20.157 01:11:45 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:20.157 01:11:45 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:06:20.157 01:11:45 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:20.157 01:11:45 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:20.157 01:11:45 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:20.157 01:11:45 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:20.157 01:11:45 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:20.157 01:11:45 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:20.157 01:11:45 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.157 01:11:45 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.157 01:11:45 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.157 01:11:45 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.157 01:11:45 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.157 01:11:45 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:20.157 01:11:45 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:20.157 [2024-07-16 01:11:45.899933] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:06:20.157 [2024-07-16 01:11:45.899969] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3221918 ] 00:06:20.157 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.157 [2024-07-16 01:11:45.952269] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.157 [2024-07-16 01:11:46.023156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:20.157 01:11:46 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:20.157 01:11:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:21.594 01:11:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:21.594 01:11:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.594 01:11:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:21.594 01:11:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:21.594 01:11:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:21.594 01:11:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.594 01:11:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:21.594 01:11:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:21.594 01:11:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:21.594 01:11:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.594 01:11:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:21.594 01:11:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:21.594 01:11:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:21.594 01:11:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.594 01:11:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:21.594 01:11:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:21.594 01:11:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:21.594 01:11:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.594 01:11:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:21.594 01:11:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:21.594 01:11:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:21.594 01:11:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.594 01:11:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:21.594 01:11:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:21.594 01:11:47 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:21.594 01:11:47 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:21.594 01:11:47 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.594 00:06:21.594 real 0m1.317s 00:06:21.594 user 0m1.229s 00:06:21.594 sys 0m0.102s 00:06:21.594 01:11:47 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.594 01:11:47 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:21.594 ************************************ 00:06:21.594 END TEST accel_comp 00:06:21.594 ************************************ 00:06:21.594 01:11:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:21.594 01:11:47 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:21.594 01:11:47 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:21.594 01:11:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.594 01:11:47 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:21.594 ************************************ 00:06:21.594 START TEST accel_decomp 00:06:21.594 ************************************ 00:06:21.594 01:11:47 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:21.594 [2024-07-16 01:11:47.288891] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
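From accel_comp onward the workloads operate on a real payload: -l points accel_perf at test/accel/bib, and the decompress runs add -y. A sketch of the pair under the same assumptions as above; reading -y as verify-on-completion is an assumption based on its use here, not something this log states:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./build/examples/accel_perf -t 1 -w compress -l test/accel/bib
  # decompress variant as launched below; -y assumed to enable result verification
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y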
00:06:21.594 [2024-07-16 01:11:47.288936] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3222166 ] 00:06:21.594 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.594 [2024-07-16 01:11:47.344418] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.594 [2024-07-16 01:11:47.416837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:21.594 01:11:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:22.966 01:11:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:22.966 01:11:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.966 01:11:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:22.966 01:11:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:22.966 01:11:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:22.966 01:11:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.966 01:11:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:22.966 01:11:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:22.966 01:11:48 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:22.966 01:11:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.966 01:11:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:22.966 01:11:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:22.966 01:11:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:22.966 01:11:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.966 01:11:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:22.966 01:11:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:22.966 01:11:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:22.966 01:11:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.966 01:11:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:22.966 01:11:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:22.966 01:11:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:22.966 01:11:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.966 01:11:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:22.966 01:11:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:22.966 01:11:48 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:22.966 01:11:48 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:22.966 01:11:48 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.966 00:06:22.966 real 0m1.335s 00:06:22.966 user 0m1.233s 00:06:22.966 sys 0m0.117s 00:06:22.966 01:11:48 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.966 01:11:48 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:22.966 ************************************ 00:06:22.966 END TEST accel_decomp 00:06:22.966 ************************************ 00:06:22.966 01:11:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:22.966 01:11:48 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:22.966 01:11:48 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:22.966 01:11:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.966 01:11:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:22.966 ************************************ 00:06:22.966 START TEST accel_decomp_full 00:06:22.966 ************************************ 00:06:22.966 01:11:48 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:22.966 01:11:48 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:22.966 01:11:48 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:22.966 01:11:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:22.966 01:11:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:22.966 01:11:48 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:22.966 01:11:48 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:22.966 01:11:48 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:22.966 01:11:48 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.966 01:11:48 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.966 01:11:48 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.966 01:11:48 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.966 01:11:48 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.966 01:11:48 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:22.966 01:11:48 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:22.966 [2024-07-16 01:11:48.683049] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:06:22.966 [2024-07-16 01:11:48.683093] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3222417 ] 00:06:22.966 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.966 [2024-07-16 01:11:48.738136] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.966 [2024-07-16 01:11:48.809858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.966 01:11:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:22.966 01:11:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:22.966 01:11:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:22.966 01:11:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:22.967 01:11:48 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:22.967 01:11:48 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:24.342 01:11:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:24.342 01:11:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:24.342 01:11:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:24.342 01:11:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:24.342 01:11:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:24.342 01:11:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:24.342 01:11:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:24.342 01:11:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:24.342 01:11:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:24.342 01:11:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:24.342 01:11:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:24.342 01:11:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:24.342 01:11:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:24.342 01:11:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:24.342 01:11:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:24.342 01:11:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:24.342 01:11:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:24.342 01:11:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:24.342 01:11:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:24.342 01:11:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:24.342 01:11:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:24.342 01:11:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:24.342 01:11:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:24.342 01:11:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:24.342 01:11:49 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:24.342 01:11:49 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:24.342 01:11:49 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.342 00:06:24.342 real 0m1.341s 00:06:24.342 user 0m1.245s 00:06:24.342 sys 0m0.108s 00:06:24.342 01:11:49 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.342 01:11:49 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:24.342 ************************************ 00:06:24.342 END TEST accel_decomp_full 00:06:24.342 ************************************ 00:06:24.342 01:11:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:24.342 01:11:50 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:24.342 01:11:50 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:06:24.342 01:11:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.342 01:11:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.342 ************************************ 00:06:24.342 START TEST accel_decomp_mcore 00:06:24.342 ************************************ 00:06:24.342 01:11:50 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:24.342 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:24.342 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:24.342 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.342 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.342 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:24.342 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:24.342 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:24.342 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.342 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.342 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.342 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.342 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.342 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:24.342 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:24.342 [2024-07-16 01:11:50.088823] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
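accel_decomp_mcore, starting here, reruns the same decompress workload with core mask 0xf; the EAL parameter line below carries -c 0xf, the app reports 'Total cores available: 4', and four reactors come up on cores 0-3. The invocation, as visible in the run_test line above:

  # -m 0xf: spread the workload across cores 0-3
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -m 0xf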
00:06:24.342 [2024-07-16 01:11:50.088871] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3222667 ] 00:06:24.342 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.342 [2024-07-16 01:11:50.145322] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:24.342 [2024-07-16 01:11:50.220152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.342 [2024-07-16 01:11:50.220246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.342 [2024-07-16 01:11:50.220347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.342 [2024-07-16 01:11:50.220361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:24.342 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:24.342 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.342 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.342 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.342 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:24.342 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.342 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.342 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.342 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:24.342 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.342 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.342 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.342 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:24.342 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.342 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.342 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.342 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:24.342 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.342 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.342 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.342 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:24.342 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.342 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.342 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.342 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:24.342 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.342 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:24.342 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.342 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.343 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:24.343 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.343 01:11:50 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.343 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.343 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:24.343 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.343 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.343 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.343 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:24.343 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.343 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:24.343 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.343 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.343 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:24.343 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.343 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.343 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.343 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:24.343 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.343 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.343 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.343 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:24.343 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.343 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.343 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.343 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:24.343 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.343 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.343 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.343 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:24.343 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.343 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.343 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.343 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:24.343 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.343 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.343 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.343 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:24.343 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.343 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:24.343 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:24.343 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:24.343 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:24.343 01:11:50 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:24.343 01:11:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.719 01:11:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:25.719 01:11:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.719 01:11:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.719 01:11:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.719 01:11:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:25.719 01:11:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.719 01:11:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.719 01:11:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.719 01:11:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:25.719 01:11:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.719 01:11:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.719 01:11:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.719 01:11:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:25.719 01:11:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.719 01:11:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.719 01:11:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.719 01:11:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:25.719 01:11:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.719 01:11:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.719 01:11:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.719 01:11:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:25.719 01:11:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.719 01:11:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.719 01:11:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.719 01:11:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:25.719 01:11:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.719 01:11:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.719 01:11:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.719 01:11:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:25.719 01:11:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.719 01:11:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.719 01:11:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.719 01:11:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:25.719 01:11:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.719 01:11:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.719 01:11:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.719 01:11:51 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:25.719 01:11:51 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:25.719 01:11:51 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.719 00:06:25.719 real 0m1.348s 00:06:25.719 user 0m4.561s 00:06:25.719 sys 0m0.127s 00:06:25.719 01:11:51 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.719 01:11:51 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:25.719 ************************************ 00:06:25.719 END TEST accel_decomp_mcore 00:06:25.719 ************************************ 00:06:25.719 01:11:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:25.719 01:11:51 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:25.719 01:11:51 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:25.719 01:11:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.719 01:11:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:25.719 ************************************ 00:06:25.719 START TEST accel_decomp_full_mcore 00:06:25.719 ************************************ 00:06:25.719 01:11:51 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:25.719 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:25.719 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:25.719 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.719 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.719 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:25.719 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:25.719 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:25.719 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.719 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.719 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.719 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:25.720 [2024-07-16 01:11:51.501452] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
00:06:25.720 [2024-07-16 01:11:51.501518] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3222925 ] 00:06:25.720 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.720 [2024-07-16 01:11:51.558995] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:25.720 [2024-07-16 01:11:51.633594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.720 [2024-07-16 01:11:51.633692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:25.720 [2024-07-16 01:11:51.633783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:25.720 [2024-07-16 01:11:51.633784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:25.720 01:11:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.096 01:11:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:27.096 01:11:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.096 01:11:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.096 01:11:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.096 01:11:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:27.096 01:11:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.096 01:11:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.096 01:11:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.096 01:11:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:27.096 01:11:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.096 01:11:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.096 01:11:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.096 01:11:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:27.096 01:11:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.096 01:11:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.096 01:11:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.096 01:11:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:27.096 01:11:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.096 01:11:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.096 01:11:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.096 01:11:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:27.096 01:11:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.096 01:11:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.096 01:11:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.096 01:11:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:27.096 01:11:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.096 01:11:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.096 01:11:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.096 01:11:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:27.096 01:11:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.096 01:11:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.096 01:11:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.096 01:11:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:27.096 01:11:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.096 01:11:52 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:27.096 01:11:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.096 01:11:52 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:27.096 01:11:52 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:27.096 01:11:52 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.096 00:06:27.096 real 0m1.360s 00:06:27.096 user 0m4.615s 00:06:27.096 sys 0m0.119s 00:06:27.096 01:11:52 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.096 01:11:52 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:27.096 ************************************ 00:06:27.096 END TEST accel_decomp_full_mcore 00:06:27.096 ************************************ 00:06:27.096 01:11:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:27.096 01:11:52 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:27.096 01:11:52 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:27.096 01:11:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.096 01:11:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.096 ************************************ 00:06:27.096 START TEST accel_decomp_mthread 00:06:27.096 ************************************ 00:06:27.097 01:11:52 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:27.097 01:11:52 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:27.097 01:11:52 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:27.097 01:11:52 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:27.097 01:11:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.097 01:11:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.097 01:11:52 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:27.097 01:11:52 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:27.097 01:11:52 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.097 01:11:52 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.097 01:11:52 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.097 01:11:52 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.097 01:11:52 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.097 01:11:52 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:27.097 01:11:52 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:27.097 [2024-07-16 01:11:52.913376] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
00:06:27.097 [2024-07-16 01:11:52.913411] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3223178 ] 00:06:27.097 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.097 [2024-07-16 01:11:52.966752] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.097 [2024-07-16 01:11:53.038198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.097 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:27.097 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.097 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.097 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.097 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:27.097 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.097 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.097 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:27.357 01:11:53 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.357 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.358 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:27.358 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:27.358 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:27.358 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:27.358 01:11:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.296 01:11:54 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:06:28.296 01:11:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.296 01:11:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.296 01:11:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.296 01:11:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:28.296 01:11:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.296 01:11:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.296 01:11:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.296 01:11:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:28.296 01:11:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.296 01:11:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.296 01:11:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.296 01:11:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:28.296 01:11:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.296 01:11:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.296 01:11:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.296 01:11:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:28.296 01:11:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.296 01:11:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.296 01:11:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.296 01:11:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:28.296 01:11:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.296 01:11:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.296 01:11:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.296 01:11:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:28.296 01:11:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.296 01:11:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.296 01:11:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.296 01:11:54 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:28.296 01:11:54 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:28.296 01:11:54 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.296 00:06:28.296 real 0m1.321s 00:06:28.296 user 0m1.229s 00:06:28.296 sys 0m0.106s 00:06:28.296 01:11:54 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.296 01:11:54 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:28.296 ************************************ 00:06:28.296 END TEST accel_decomp_mthread 00:06:28.296 ************************************ 00:06:28.296 01:11:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:28.297 01:11:54 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:28.297 01:11:54 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:28.297 01:11:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.297 01:11:54 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:28.557 ************************************ 00:06:28.557 START TEST accel_decomp_full_mthread 00:06:28.557 ************************************ 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:28.557 [2024-07-16 01:11:54.297513] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
00:06:28.557 [2024-07-16 01:11:54.297561] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3223426 ] 00:06:28.557 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.557 [2024-07-16 01:11:54.351826] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.557 [2024-07-16 01:11:54.423474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.557 01:11:54 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:28.557 01:11:54 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:28.557 01:11:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.933 01:11:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:29.933 01:11:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.933 01:11:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.933 01:11:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.933 01:11:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:29.933 01:11:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.933 01:11:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.933 01:11:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.933 01:11:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:29.933 01:11:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.933 01:11:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.933 01:11:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.933 01:11:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:29.933 01:11:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.933 01:11:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.933 01:11:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.933 01:11:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:29.933 01:11:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.933 01:11:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.933 01:11:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.933 01:11:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:29.933 01:11:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.934 01:11:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.934 01:11:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.934 01:11:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:29.934 01:11:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.934 01:11:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.934 01:11:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.934 01:11:55 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:29.934 01:11:55 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:29.934 01:11:55 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.934 00:06:29.934 real 0m1.346s 00:06:29.934 user 0m1.252s 00:06:29.934 sys 0m0.107s 00:06:29.934 01:11:55 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.934 01:11:55 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:29.934 ************************************ 00:06:29.934 END 
TEST accel_decomp_full_mthread 00:06:29.934 ************************************ 00:06:29.934 01:11:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:29.934 01:11:55 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:29.934 01:11:55 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:29.934 01:11:55 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:29.934 01:11:55 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:29.934 01:11:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.934 01:11:55 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.934 01:11:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.934 01:11:55 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.934 01:11:55 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.934 01:11:55 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.934 01:11:55 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.934 01:11:55 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:29.934 01:11:55 accel -- accel/accel.sh@41 -- # jq -r . 00:06:29.934 ************************************ 00:06:29.934 START TEST accel_dif_functional_tests 00:06:29.934 ************************************ 00:06:29.934 01:11:55 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:29.934 [2024-07-16 01:11:55.727906] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:06:29.934 [2024-07-16 01:11:55.727942] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3223687 ] 00:06:29.934 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.934 [2024-07-16 01:11:55.780792] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:29.934 [2024-07-16 01:11:55.854077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.934 [2024-07-16 01:11:55.854173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.934 [2024-07-16 01:11:55.854175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.193 00:06:30.193 00:06:30.193 CUnit - A unit testing framework for C - Version 2.1-3 00:06:30.193 http://cunit.sourceforge.net/ 00:06:30.193 00:06:30.193 00:06:30.193 Suite: accel_dif 00:06:30.193 Test: verify: DIF generated, GUARD check ...passed 00:06:30.193 Test: verify: DIF generated, APPTAG check ...passed 00:06:30.193 Test: verify: DIF generated, REFTAG check ...passed 00:06:30.193 Test: verify: DIF not generated, GUARD check ...[2024-07-16 01:11:55.922771] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:30.193 passed 00:06:30.193 Test: verify: DIF not generated, APPTAG check ...[2024-07-16 01:11:55.922814] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:30.193 passed 00:06:30.193 Test: verify: DIF not generated, REFTAG check ...[2024-07-16 01:11:55.922832] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:30.193 passed 00:06:30.193 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:30.193 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-16 
01:11:55.922875] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:30.193 passed 00:06:30.193 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:30.193 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:30.193 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:30.193 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-16 01:11:55.922968] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:30.193 passed 00:06:30.193 Test: verify copy: DIF generated, GUARD check ...passed 00:06:30.193 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:30.193 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:30.193 Test: verify copy: DIF not generated, GUARD check ...[2024-07-16 01:11:55.923074] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:30.193 passed 00:06:30.193 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-16 01:11:55.923096] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:30.193 passed 00:06:30.193 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-16 01:11:55.923115] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:30.193 passed 00:06:30.193 Test: generate copy: DIF generated, GUARD check ...passed 00:06:30.193 Test: generate copy: DIF generated, APPTAG check ...passed 00:06:30.193 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:30.193 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:30.193 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:30.193 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:30.193 Test: generate copy: iovecs-len validate ...[2024-07-16 01:11:55.923277] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:06:30.193 passed 00:06:30.193 Test: generate copy: buffer alignment validate ...passed 00:06:30.193 00:06:30.193 Run Summary: Type Total Ran Passed Failed Inactive 00:06:30.193 suites 1 1 n/a 0 0 00:06:30.193 tests 26 26 26 0 0 00:06:30.193 asserts 115 115 115 0 n/a 00:06:30.193 00:06:30.193 Elapsed time = 0.000 seconds 00:06:30.193 00:06:30.193 real 0m0.395s 00:06:30.193 user 0m0.616s 00:06:30.193 sys 0m0.134s 00:06:30.193 01:11:56 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.193 01:11:56 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:30.193 ************************************ 00:06:30.193 END TEST accel_dif_functional_tests 00:06:30.193 ************************************ 00:06:30.193 01:11:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:30.193 00:06:30.193 real 0m30.597s 00:06:30.193 user 0m34.662s 00:06:30.193 sys 0m3.975s 00:06:30.193 01:11:56 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.193 01:11:56 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.193 ************************************ 00:06:30.193 END TEST accel 00:06:30.193 ************************************ 00:06:30.193 01:11:56 -- common/autotest_common.sh@1142 -- # return 0 00:06:30.193 01:11:56 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:30.193 01:11:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:30.193 01:11:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.193 01:11:56 -- common/autotest_common.sh@10 -- # set +x 00:06:30.452 ************************************ 00:06:30.452 START TEST accel_rpc 00:06:30.452 ************************************ 00:06:30.452 01:11:56 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:30.452 * Looking for test storage... 00:06:30.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:30.452 01:11:56 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:30.452 01:11:56 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3223951 00:06:30.452 01:11:56 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3223951 00:06:30.452 01:11:56 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:30.452 01:11:56 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 3223951 ']' 00:06:30.452 01:11:56 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.452 01:11:56 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:30.452 01:11:56 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.452 01:11:56 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:30.452 01:11:56 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.452 [2024-07-16 01:11:56.320742] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
00:06:30.452 [2024-07-16 01:11:56.320790] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3223951 ] 00:06:30.452 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.452 [2024-07-16 01:11:56.375200] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.710 [2024-07-16 01:11:56.449514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.278 01:11:57 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:31.278 01:11:57 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:31.278 01:11:57 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:31.278 01:11:57 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:31.278 01:11:57 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:31.278 01:11:57 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:31.278 01:11:57 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:31.278 01:11:57 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:31.278 01:11:57 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.278 01:11:57 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.278 ************************************ 00:06:31.278 START TEST accel_assign_opcode 00:06:31.278 ************************************ 00:06:31.278 01:11:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:31.278 01:11:57 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:31.278 01:11:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.278 01:11:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:31.278 [2024-07-16 01:11:57.131550] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:31.278 01:11:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.278 01:11:57 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:31.278 01:11:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.278 01:11:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:31.278 [2024-07-16 01:11:57.139556] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:31.278 01:11:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.278 01:11:57 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:31.278 01:11:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.278 01:11:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:31.537 01:11:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.537 01:11:57 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:31.537 01:11:57 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:31.537 01:11:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 
00:06:31.537 01:11:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:31.537 01:11:57 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:31.537 01:11:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.537 software 00:06:31.537 00:06:31.537 real 0m0.224s 00:06:31.537 user 0m0.029s 00:06:31.537 sys 0m0.007s 00:06:31.537 01:11:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.537 01:11:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:31.537 ************************************ 00:06:31.537 END TEST accel_assign_opcode 00:06:31.537 ************************************ 00:06:31.537 01:11:57 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:31.537 01:11:57 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3223951 00:06:31.537 01:11:57 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 3223951 ']' 00:06:31.537 01:11:57 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 3223951 00:06:31.537 01:11:57 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:31.537 01:11:57 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:31.537 01:11:57 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3223951 00:06:31.537 01:11:57 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:31.537 01:11:57 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:31.537 01:11:57 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3223951' 00:06:31.537 killing process with pid 3223951 00:06:31.537 01:11:57 accel_rpc -- common/autotest_common.sh@967 -- # kill 3223951 00:06:31.537 01:11:57 accel_rpc -- common/autotest_common.sh@972 -- # wait 3223951 00:06:31.796 00:06:31.796 real 0m1.536s 00:06:31.796 user 0m1.589s 00:06:31.796 sys 0m0.387s 00:06:31.796 01:11:57 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.796 01:11:57 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.796 ************************************ 00:06:31.796 END TEST accel_rpc 00:06:31.796 ************************************ 00:06:31.796 01:11:57 -- common/autotest_common.sh@1142 -- # return 0 00:06:31.796 01:11:57 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:31.796 01:11:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:31.796 01:11:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.796 01:11:57 -- common/autotest_common.sh@10 -- # set +x 00:06:32.056 ************************************ 00:06:32.056 START TEST app_cmdline 00:06:32.056 ************************************ 00:06:32.056 01:11:57 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:32.056 * Looking for test storage... 
00:06:32.056 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:32.056 01:11:57 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:32.056 01:11:57 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3224266 00:06:32.056 01:11:57 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3224266 00:06:32.056 01:11:57 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:32.056 01:11:57 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 3224266 ']' 00:06:32.056 01:11:57 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.056 01:11:57 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:32.056 01:11:57 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.056 01:11:57 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:32.056 01:11:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:32.056 [2024-07-16 01:11:57.923833] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:06:32.056 [2024-07-16 01:11:57.923886] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3224266 ] 00:06:32.056 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.056 [2024-07-16 01:11:57.978449] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.315 [2024-07-16 01:11:58.059274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.884 01:11:58 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:32.884 01:11:58 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:32.884 01:11:58 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:32.884 { 00:06:32.884 "version": "SPDK v24.09-pre git sha1 315cf04b6", 00:06:32.884 "fields": { 00:06:32.884 "major": 24, 00:06:32.884 "minor": 9, 00:06:32.884 "patch": 0, 00:06:32.884 "suffix": "-pre", 00:06:32.884 "commit": "315cf04b6" 00:06:32.884 } 00:06:32.884 } 00:06:32.884 01:11:58 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:32.884 01:11:58 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:32.884 01:11:58 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:32.884 01:11:58 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:33.144 01:11:58 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:33.144 01:11:58 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:33.144 01:11:58 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:33.144 01:11:58 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.144 01:11:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:33.144 01:11:58 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.144 01:11:58 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:33.144 01:11:58 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:33.144 01:11:58 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:33.144 01:11:58 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:33.144 01:11:58 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:33.144 01:11:58 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:33.144 01:11:58 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:33.144 01:11:58 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:33.144 01:11:58 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:33.144 01:11:58 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:33.144 01:11:58 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:33.144 01:11:58 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:33.144 01:11:58 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:33.144 01:11:58 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:33.144 request: 00:06:33.144 { 00:06:33.144 "method": "env_dpdk_get_mem_stats", 00:06:33.144 "req_id": 1 00:06:33.144 } 00:06:33.144 Got JSON-RPC error response 00:06:33.144 response: 00:06:33.144 { 00:06:33.144 "code": -32601, 00:06:33.144 "message": "Method not found" 00:06:33.144 } 00:06:33.144 01:11:59 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:33.144 01:11:59 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:33.144 01:11:59 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:33.144 01:11:59 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:33.144 01:11:59 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3224266 00:06:33.144 01:11:59 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 3224266 ']' 00:06:33.144 01:11:59 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 3224266 00:06:33.144 01:11:59 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:33.144 01:11:59 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:33.144 01:11:59 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3224266 00:06:33.403 01:11:59 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:33.403 01:11:59 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:33.403 01:11:59 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3224266' 00:06:33.403 killing process with pid 3224266 00:06:33.403 01:11:59 app_cmdline -- common/autotest_common.sh@967 -- # kill 3224266 00:06:33.403 01:11:59 app_cmdline -- common/autotest_common.sh@972 -- # wait 3224266 00:06:33.662 00:06:33.662 real 0m1.637s 00:06:33.662 user 0m1.975s 00:06:33.662 sys 0m0.393s 00:06:33.662 01:11:59 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
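What the NOT wrapper above verifies: the cmdline target was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so env_dpdk_get_mem_stats is rejected at dispatch with JSON-RPC error -32601 ("Method not found") rather than being executed. A hand-run probe of the allowlist might look like the following (a hedged sketch: it assumes rpc.py prints the error response and exits non-zero on a rejected method, which is the behaviour the NOT helper in the trace relies on):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  if $rpc env_dpdk_get_mem_stats 2>&1 | grep -q 'Method not found'; then
      echo 'allowlist enforced as expected'
  fi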
00:06:33.663 01:11:59 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:33.663 ************************************ 00:06:33.663 END TEST app_cmdline 00:06:33.663 ************************************ 00:06:33.663 01:11:59 -- common/autotest_common.sh@1142 -- # return 0 00:06:33.663 01:11:59 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:33.663 01:11:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:33.663 01:11:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.663 01:11:59 -- common/autotest_common.sh@10 -- # set +x 00:06:33.663 ************************************ 00:06:33.663 START TEST version 00:06:33.663 ************************************ 00:06:33.663 01:11:59 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:33.663 * Looking for test storage... 00:06:33.663 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:33.663 01:11:59 version -- app/version.sh@17 -- # get_header_version major 00:06:33.663 01:11:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:33.663 01:11:59 version -- app/version.sh@14 -- # cut -f2 00:06:33.663 01:11:59 version -- app/version.sh@14 -- # tr -d '"' 00:06:33.663 01:11:59 version -- app/version.sh@17 -- # major=24 00:06:33.663 01:11:59 version -- app/version.sh@18 -- # get_header_version minor 00:06:33.663 01:11:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:33.663 01:11:59 version -- app/version.sh@14 -- # cut -f2 00:06:33.663 01:11:59 version -- app/version.sh@14 -- # tr -d '"' 00:06:33.663 01:11:59 version -- app/version.sh@18 -- # minor=9 00:06:33.663 01:11:59 version -- app/version.sh@19 -- # get_header_version patch 00:06:33.663 01:11:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:33.663 01:11:59 version -- app/version.sh@14 -- # cut -f2 00:06:33.663 01:11:59 version -- app/version.sh@14 -- # tr -d '"' 00:06:33.663 01:11:59 version -- app/version.sh@19 -- # patch=0 00:06:33.663 01:11:59 version -- app/version.sh@20 -- # get_header_version suffix 00:06:33.663 01:11:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:33.663 01:11:59 version -- app/version.sh@14 -- # cut -f2 00:06:33.663 01:11:59 version -- app/version.sh@14 -- # tr -d '"' 00:06:33.663 01:11:59 version -- app/version.sh@20 -- # suffix=-pre 00:06:33.663 01:11:59 version -- app/version.sh@22 -- # version=24.9 00:06:33.663 01:11:59 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:33.663 01:11:59 version -- app/version.sh@28 -- # version=24.9rc0 00:06:33.663 01:11:59 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:33.663 01:11:59 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:06:33.663 01:11:59 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:33.663 01:11:59 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:33.663 00:06:33.663 real 0m0.149s 00:06:33.663 user 0m0.084s 00:06:33.663 sys 0m0.099s 00:06:33.663 01:11:59 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.663 01:11:59 version -- common/autotest_common.sh@10 -- # set +x 00:06:33.922 ************************************ 00:06:33.922 END TEST version 00:06:33.922 ************************************ 00:06:33.922 01:11:59 -- common/autotest_common.sh@1142 -- # return 0 00:06:33.922 01:11:59 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:33.922 01:11:59 -- spdk/autotest.sh@198 -- # uname -s 00:06:33.922 01:11:59 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:33.922 01:11:59 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:33.922 01:11:59 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:33.922 01:11:59 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:33.922 01:11:59 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:33.922 01:11:59 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:33.922 01:11:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:33.922 01:11:59 -- common/autotest_common.sh@10 -- # set +x 00:06:33.922 01:11:59 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:33.922 01:11:59 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:33.922 01:11:59 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:33.922 01:11:59 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:33.922 01:11:59 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:33.922 01:11:59 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:33.923 01:11:59 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:33.923 01:11:59 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:33.923 01:11:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.923 01:11:59 -- common/autotest_common.sh@10 -- # set +x 00:06:33.923 ************************************ 00:06:33.923 START TEST nvmf_tcp 00:06:33.923 ************************************ 00:06:33.923 01:11:59 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:33.923 * Looking for test storage... 00:06:33.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:33.923 01:11:59 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:33.923 01:11:59 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:33.923 01:11:59 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:33.923 01:11:59 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:33.923 01:11:59 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:33.923 01:11:59 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:33.923 01:11:59 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:33.923 01:11:59 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:33.923 01:11:59 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:33.923 01:11:59 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:33.923 01:11:59 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:33.923 01:11:59 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:33.923 01:11:59 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:33.923 01:11:59 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:33.923 01:11:59 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:33.923 01:11:59 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:33.923 01:11:59 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:33.923 01:11:59 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:33.923 01:11:59 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:33.923 01:11:59 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:33.923 01:11:59 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:33.923 01:11:59 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:33.923 01:11:59 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:33.923 01:11:59 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:33.923 01:11:59 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.923 01:11:59 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.923 01:11:59 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.923 01:11:59 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:33.923 01:11:59 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.923 01:11:59 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:33.923 01:11:59 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:33.923 01:11:59 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:33.923 01:11:59 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:33.923 01:11:59 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:33.923 01:11:59 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:33.923 01:11:59 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:33.923 01:11:59 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:33.923 01:11:59 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:33.923 01:11:59 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:33.923 01:11:59 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:33.923 01:11:59 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:33.923 01:11:59 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:33.923 01:11:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:33.923 01:11:59 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:33.923 01:11:59 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:33.923 01:11:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:33.923 01:11:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.923 01:11:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:33.923 ************************************ 00:06:33.923 START TEST nvmf_example 00:06:33.923 ************************************ 00:06:33.923 01:11:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:34.189 * Looking for test storage... 
00:06:34.189 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:34.189 01:11:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:34.189 01:12:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:34.189 01:12:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:34.189 01:12:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:34.189 01:12:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:34.189 01:12:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:34.189 01:12:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:34.189 01:12:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:34.189 01:12:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:34.189 01:12:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:34.189 01:12:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:34.189 01:12:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:34.189 01:12:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:39.457 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:39.457 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:06:39.457 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:39.457 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:39.457 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:39.457 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:39.457 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:39.457 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:06:39.457 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:39.457 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:06:39.457 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:06:39.457 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:06:39.457 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:06:39.457 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:06:39.457 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:06:39.457 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:39.457 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:39.457 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:39.457 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:39.457 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:39.457 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:39.457 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:39.457 01:12:05 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:39.457 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:39.457 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:39.457 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:39.457 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:39.457 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:39.457 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:39.457 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:39.457 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:39.457 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:39.457 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:39.457 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:39.457 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:39.457 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:39.457 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:39.457 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:39.457 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:39.457 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:39.457 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:39.457 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:39.457 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:39.458 Found net devices under 
0000:86:00.0: cvl_0_0 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:39.458 Found net devices under 0000:86:00.1: cvl_0_1 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:39.458 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:39.458 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:06:39.458 00:06:39.458 --- 10.0.0.2 ping statistics --- 00:06:39.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:39.458 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:39.458 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:39.458 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:06:39.458 00:06:39.458 --- 10.0.0.1 ping statistics --- 00:06:39.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:39.458 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3227833 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3227833 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 3227833 ']' 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
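The block above is nvmf_tcp_init building the physical-NIC test topology: one e810 port (cvl_0_0) is moved into a dedicated network namespace for the target while the peer port (cvl_0_1) stays in the root namespace for the initiator, so NVMe/TCP traffic crosses the real NIC ports instead of loopback, and the two pings confirm reachability in both directions. Condensed from the trace, the essential steps are:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                        # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator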
00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.458 01:12:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:39.458 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.394 01:12:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:40.394 01:12:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:06:40.394 01:12:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:40.394 01:12:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:40.394 01:12:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:40.394 01:12:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:40.394 01:12:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.394 01:12:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:40.394 01:12:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.394 01:12:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:40.394 01:12:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.394 01:12:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:40.394 01:12:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.394 01:12:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:40.394 01:12:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:40.394 01:12:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.394 01:12:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:40.394 01:12:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.394 01:12:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:40.394 01:12:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:40.394 01:12:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.394 01:12:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:40.394 01:12:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.394 01:12:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:40.394 01:12:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.394 01:12:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:40.394 01:12:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.394 01:12:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:40.394 01:12:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:40.394 EAL: No free 2048 kB hugepages reported on node 1 
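With the nvmf example app listening inside the namespace, the test provisions it over RPC from the root namespace and then drives it with spdk_nvme_perf; the latency table that follows is the output of that run. The sequence, condensed from the trace (rpc.py reaches the app through its default /var/tmp/spdk.sock, which is visible from the root namespace):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192     # TCP transport, options from nvmf/common.sh
  $rpc bdev_malloc_create 64 512                   # 64 MiB, 512 B blocks -> Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # 4 KiB random mixed read/write for 10 s at queue depth 64
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'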
00:06:52.595 Initializing NVMe Controllers 00:06:52.595 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:52.595 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:52.595 Initialization complete. Launching workers. 00:06:52.595 ======================================================== 00:06:52.595 Latency(us) 00:06:52.595 Device Information : IOPS MiB/s Average min max 00:06:52.595 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18563.29 72.51 3444.98 651.56 18269.43 00:06:52.595 ======================================================== 00:06:52.595 Total : 18563.29 72.51 3444.98 651.56 18269.43 00:06:52.595 00:06:52.595 01:12:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:52.595 01:12:16 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:52.595 01:12:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:52.595 01:12:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:06:52.595 01:12:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:52.595 01:12:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:06:52.595 01:12:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:52.595 01:12:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:52.595 rmmod nvme_tcp 00:06:52.595 rmmod nvme_fabrics 00:06:52.595 rmmod nvme_keyring 00:06:52.595 01:12:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:52.595 01:12:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:06:52.595 01:12:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:06:52.595 01:12:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3227833 ']' 00:06:52.595 01:12:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3227833 00:06:52.595 01:12:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 3227833 ']' 00:06:52.595 01:12:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 3227833 00:06:52.595 01:12:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:06:52.595 01:12:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:52.595 01:12:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3227833 00:06:52.595 01:12:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:06:52.595 01:12:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:06:52.595 01:12:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3227833' 00:06:52.595 killing process with pid 3227833 00:06:52.595 01:12:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 3227833 00:06:52.595 01:12:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 3227833 00:06:52.595 nvmf threads initialize successfully 00:06:52.595 bdev subsystem init successfully 00:06:52.595 created a nvmf target service 00:06:52.595 create targets's poll groups done 00:06:52.596 all subsystems of target started 00:06:52.596 nvmf target is running 00:06:52.596 all subsystems of target stopped 00:06:52.596 destroy targets's poll groups done 00:06:52.596 destroyed the nvmf target service 00:06:52.596 bdev subsystem finish successfully 00:06:52.596 nvmf threads destroy successfully 00:06:52.596 01:12:16 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:52.596 01:12:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:52.596 01:12:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:52.596 01:12:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:52.596 01:12:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:52.596 01:12:16 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:52.596 01:12:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:52.596 01:12:16 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:52.854 01:12:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:52.854 01:12:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:52.854 01:12:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:52.854 01:12:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:53.115 00:06:53.115 real 0m18.959s 00:06:53.115 user 0m45.387s 00:06:53.115 sys 0m5.446s 00:06:53.115 01:12:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.115 01:12:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:53.115 ************************************ 00:06:53.115 END TEST nvmf_example 00:06:53.115 ************************************ 00:06:53.115 01:12:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:53.115 01:12:18 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:53.115 01:12:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:53.115 01:12:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.115 01:12:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:53.115 ************************************ 00:06:53.115 START TEST nvmf_filesystem 00:06:53.115 ************************************ 00:06:53.115 01:12:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:53.115 * Looking for test storage... 
00:06:53.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:53.115 01:12:19 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:53.115 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:53.116 #define SPDK_CONFIG_H 00:06:53.116 #define SPDK_CONFIG_APPS 1 00:06:53.116 #define SPDK_CONFIG_ARCH native 00:06:53.116 #undef SPDK_CONFIG_ASAN 00:06:53.116 #undef SPDK_CONFIG_AVAHI 00:06:53.116 #undef SPDK_CONFIG_CET 00:06:53.116 #define SPDK_CONFIG_COVERAGE 1 00:06:53.116 #define SPDK_CONFIG_CROSS_PREFIX 00:06:53.116 #undef SPDK_CONFIG_CRYPTO 00:06:53.116 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:53.116 #undef SPDK_CONFIG_CUSTOMOCF 00:06:53.116 #undef SPDK_CONFIG_DAOS 00:06:53.116 #define SPDK_CONFIG_DAOS_DIR 00:06:53.116 #define SPDK_CONFIG_DEBUG 1 00:06:53.116 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:53.116 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:53.116 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:53.116 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:53.116 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:53.116 #undef SPDK_CONFIG_DPDK_UADK 00:06:53.116 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:53.116 #define SPDK_CONFIG_EXAMPLES 1 00:06:53.116 #undef SPDK_CONFIG_FC 00:06:53.116 #define SPDK_CONFIG_FC_PATH 00:06:53.116 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:53.116 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:53.116 #undef SPDK_CONFIG_FUSE 00:06:53.116 #undef SPDK_CONFIG_FUZZER 00:06:53.116 #define SPDK_CONFIG_FUZZER_LIB 00:06:53.116 #undef SPDK_CONFIG_GOLANG 00:06:53.116 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:53.116 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:53.116 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:53.116 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:53.116 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:53.116 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:53.116 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:53.116 #define SPDK_CONFIG_IDXD 1 00:06:53.116 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:53.116 #undef SPDK_CONFIG_IPSEC_MB 00:06:53.116 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:53.116 #define SPDK_CONFIG_ISAL 1 00:06:53.116 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:53.116 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:53.116 #define SPDK_CONFIG_LIBDIR 00:06:53.116 #undef SPDK_CONFIG_LTO 00:06:53.116 #define SPDK_CONFIG_MAX_LCORES 128 00:06:53.116 #define SPDK_CONFIG_NVME_CUSE 1 00:06:53.116 #undef SPDK_CONFIG_OCF 00:06:53.116 #define SPDK_CONFIG_OCF_PATH 00:06:53.116 #define 
SPDK_CONFIG_OPENSSL_PATH 00:06:53.116 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:53.116 #define SPDK_CONFIG_PGO_DIR 00:06:53.116 #undef SPDK_CONFIG_PGO_USE 00:06:53.116 #define SPDK_CONFIG_PREFIX /usr/local 00:06:53.116 #undef SPDK_CONFIG_RAID5F 00:06:53.116 #undef SPDK_CONFIG_RBD 00:06:53.116 #define SPDK_CONFIG_RDMA 1 00:06:53.116 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:53.116 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:53.116 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:53.116 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:53.116 #define SPDK_CONFIG_SHARED 1 00:06:53.116 #undef SPDK_CONFIG_SMA 00:06:53.116 #define SPDK_CONFIG_TESTS 1 00:06:53.116 #undef SPDK_CONFIG_TSAN 00:06:53.116 #define SPDK_CONFIG_UBLK 1 00:06:53.116 #define SPDK_CONFIG_UBSAN 1 00:06:53.116 #undef SPDK_CONFIG_UNIT_TESTS 00:06:53.116 #undef SPDK_CONFIG_URING 00:06:53.116 #define SPDK_CONFIG_URING_PATH 00:06:53.116 #undef SPDK_CONFIG_URING_ZNS 00:06:53.116 #undef SPDK_CONFIG_USDT 00:06:53.116 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:53.116 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:53.116 #define SPDK_CONFIG_VFIO_USER 1 00:06:53.116 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:53.116 #define SPDK_CONFIG_VHOST 1 00:06:53.116 #define SPDK_CONFIG_VIRTIO 1 00:06:53.116 #undef SPDK_CONFIG_VTUNE 00:06:53.116 #define SPDK_CONFIG_VTUNE_DIR 00:06:53.116 #define SPDK_CONFIG_WERROR 1 00:06:53.116 #define SPDK_CONFIG_WPDK_DIR 00:06:53.116 #undef SPDK_CONFIG_XNVME 00:06:53.116 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:53.116 01:12:19 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:06:53.117 01:12:19 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:06:53.117 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
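The sanitizer setup traced above reduces to a small, reusable shell pattern: write the known-noisy libraries into a suppression file and point LeakSanitizer at it through LSAN_OPTIONS. A minimal sketch of that step, reusing the /var/tmp path, the libfuse3 entry, and the UBSan options that appear in the trace (everything else about the run is elided):

    # Leak-suppression setup, as traced from autotest_common.sh:
    supp=/var/tmp/asan_suppression_file
    rm -rf "$supp"                            # start from a clean file
    echo 'leak:libfuse3.so' >> "$supp"        # ignore known libfuse3 leak reports
    export LSAN_OPTIONS="suppressions=$supp"
    # UBSan options as exported above: stop and abort (exit 134) on first error
    export UBSAN_OPTIONS='halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134'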
00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j96 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 3230581 ]] 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 3230581 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.7Qh4Rt 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.7Qh4Rt/tests/target /tmp/spdk.7Qh4Rt 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:53.118 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:53.377 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:06:53.377 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:53.377 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:06:53.377 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:06:53.377 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:06:53.377 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:06:53.377 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:06:53.377 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:53.377 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:06:53.377 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:06:53.377 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=953421824 00:06:53.377 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:06:53.377 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4331008000 00:06:53.377 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:53.377 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:06:53.377 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:06:53.377 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=190111186944 00:06:53.377 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=195974328320 00:06:53.377 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5863141376 00:06:53.377 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:53.377 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:53.377 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97983787008 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987162112 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=39185489920 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=39194865664 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9375744 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97986347008 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987166208 00:06:53.378 01:12:19 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=819200 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=19597426688 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=19597430784 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:53.378 * Looking for test storage... 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=190111186944 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8077733888 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:53.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:06:53.378 01:12:19 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
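The set_test_storage trace a few entries above (requested_size=2214592512, the mktemp fallback, the df walk) is easier to read in one piece. A simplified sketch, assuming GNU df; the candidate names mirror the trace, while the real helper additionally parses 'df -T' into mounts/fss/avails arrays and special-cases tmpfs/ramfs, which is elided here:

    # Pick the first candidate directory whose filesystem has enough free space.
    testdir=$PWD                                 # illustrative; this run uses test/nvmf/target
    requested_size=2214592512                    # ~2 GiB plus slack, as in the trace
    storage_fallback=$(mktemp -udt spdk.XXXXXX)  # -u: generate a name, create nothing
    storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
    mkdir -p "${storage_candidates[@]}"
    for target_dir in "${storage_candidates[@]}"; do
        avail_kb=$(df --output=avail "$target_dir" | tail -n 1)   # free 1K blocks
        if (( avail_kb * 1024 >= requested_size )); then
            export SPDK_TEST_STORAGE=$target_dir
            printf '* Found test storage at %s\n' "$target_dir"
            break
        fi
    done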
00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:53.378 01:12:19 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:06:53.378 01:12:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:58.647 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:58.647 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:58.647 01:12:24 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:58.647 Found net devices under 0000:86:00.0: cvl_0_0 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:58.647 Found net devices under 0000:86:00.1: cvl_0_1 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:58.647 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:58.647 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:06:58.647 00:06:58.647 --- 10.0.0.2 ping statistics --- 00:06:58.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:58.647 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:58.647 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:58.647 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:06:58.647 00:06:58.647 --- 10.0.0.1 ping statistics --- 00:06:58.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:58.647 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:58.647 ************************************ 00:06:58.647 START TEST nvmf_filesystem_no_in_capsule 00:06:58.647 ************************************ 00:06:58.647 01:12:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:06:58.648 01:12:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:06:58.648 01:12:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:58.648 01:12:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:58.648 01:12:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:06:58.648 01:12:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:58.648 01:12:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3233601 00:06:58.648 01:12:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:58.648 01:12:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3233601 00:06:58.648 01:12:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 3233601 ']' 00:06:58.648 01:12:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.648 01:12:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:58.648 01:12:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.648 01:12:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:58.648 01:12:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:58.648 [2024-07-16 01:12:24.496318] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:06:58.648 [2024-07-16 01:12:24.496366] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:58.648 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.648 [2024-07-16 01:12:24.555862] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:58.905 [2024-07-16 01:12:24.641563] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:58.905 [2024-07-16 01:12:24.641598] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:58.905 [2024-07-16 01:12:24.641605] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:58.905 [2024-07-16 01:12:24.641611] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:58.905 [2024-07-16 01:12:24.641616] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
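
The nvmf_tcp_init sequence traced above gives the target one physical e810 port inside a private network namespace, so initiator and target traffic crosses the real NIC rather than loopback; the target itself is then launched inside that namespace. A condensed sketch of those commands as they appear in the trace (interface names cvl_0_0/cvl_0_1, the 10.0.0.x addresses, port 4420, and the core mask are this run's values; the nvmf_tgt path is shortened):

    ip netns add cvl_0_0_ns_spdk                        # private namespace for the target port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target-side port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the default port
    ping -c 1 10.0.0.2                                  # reachability check in both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF   # target runs inside the namespace
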
00:06:58.905 [2024-07-16 01:12:24.641657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.905 [2024-07-16 01:12:24.641775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.905 [2024-07-16 01:12:24.641852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:58.905 [2024-07-16 01:12:24.641853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.471 01:12:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:59.471 01:12:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:59.471 01:12:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:59.471 01:12:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:59.471 01:12:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:59.471 01:12:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:59.471 01:12:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:59.471 01:12:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:59.471 01:12:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.471 01:12:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:59.471 [2024-07-16 01:12:25.335345] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:59.471 01:12:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.471 01:12:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:59.471 01:12:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.471 01:12:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:59.471 Malloc1 00:06:59.471 01:12:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.471 01:12:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:59.471 01:12:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.471 01:12:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:59.730 01:12:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.730 01:12:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:59.730 01:12:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.730 01:12:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:06:59.730 01:12:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.730 01:12:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:59.730 01:12:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.730 01:12:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:59.730 [2024-07-16 01:12:25.478004] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:59.730 01:12:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.730 01:12:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:59.730 01:12:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:59.730 01:12:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:59.730 01:12:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:59.730 01:12:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:59.730 01:12:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:59.730 01:12:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.730 01:12:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:59.730 01:12:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.730 01:12:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:59.730 { 00:06:59.730 "name": "Malloc1", 00:06:59.730 "aliases": [ 00:06:59.730 "b32466db-af22-4748-bae3-f6568b907ea3" 00:06:59.730 ], 00:06:59.730 "product_name": "Malloc disk", 00:06:59.730 "block_size": 512, 00:06:59.730 "num_blocks": 1048576, 00:06:59.730 "uuid": "b32466db-af22-4748-bae3-f6568b907ea3", 00:06:59.730 "assigned_rate_limits": { 00:06:59.730 "rw_ios_per_sec": 0, 00:06:59.730 "rw_mbytes_per_sec": 0, 00:06:59.730 "r_mbytes_per_sec": 0, 00:06:59.730 "w_mbytes_per_sec": 0 00:06:59.730 }, 00:06:59.730 "claimed": true, 00:06:59.730 "claim_type": "exclusive_write", 00:06:59.730 "zoned": false, 00:06:59.730 "supported_io_types": { 00:06:59.730 "read": true, 00:06:59.730 "write": true, 00:06:59.730 "unmap": true, 00:06:59.730 "flush": true, 00:06:59.730 "reset": true, 00:06:59.730 "nvme_admin": false, 00:06:59.730 "nvme_io": false, 00:06:59.730 "nvme_io_md": false, 00:06:59.730 "write_zeroes": true, 00:06:59.730 "zcopy": true, 00:06:59.730 "get_zone_info": false, 00:06:59.730 "zone_management": false, 00:06:59.730 "zone_append": false, 00:06:59.730 "compare": false, 00:06:59.730 "compare_and_write": false, 00:06:59.730 "abort": true, 00:06:59.730 "seek_hole": false, 00:06:59.730 "seek_data": false, 00:06:59.730 "copy": true, 00:06:59.730 "nvme_iov_md": false 00:06:59.730 }, 00:06:59.730 "memory_domains": [ 00:06:59.730 { 
00:06:59.730 "dma_device_id": "system", 00:06:59.730 "dma_device_type": 1 00:06:59.730 }, 00:06:59.730 { 00:06:59.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:59.730 "dma_device_type": 2 00:06:59.730 } 00:06:59.730 ], 00:06:59.730 "driver_specific": {} 00:06:59.730 } 00:06:59.730 ]' 00:06:59.730 01:12:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:59.730 01:12:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:59.730 01:12:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:59.730 01:12:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:59.730 01:12:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:59.730 01:12:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:59.730 01:12:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:59.730 01:12:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:00.732 01:12:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:00.732 01:12:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:00.732 01:12:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:00.732 01:12:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:00.732 01:12:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:03.264 01:12:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:03.264 01:12:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:03.264 01:12:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:03.264 01:12:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:03.264 01:12:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:03.264 01:12:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:03.264 01:12:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:03.264 01:12:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:03.264 01:12:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:03.264 01:12:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:07:03.264 01:12:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:03.264 01:12:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:03.264 01:12:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:03.264 01:12:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:03.264 01:12:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:03.264 01:12:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:03.264 01:12:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:03.264 01:12:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:03.523 01:12:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:04.455 01:12:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:04.455 01:12:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:04.455 01:12:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:04.455 01:12:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.455 01:12:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:04.455 ************************************ 00:07:04.455 START TEST filesystem_ext4 00:07:04.455 ************************************ 00:07:04.455 01:12:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:04.455 01:12:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:04.455 01:12:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:04.455 01:12:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:04.455 01:12:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:04.455 01:12:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:04.455 01:12:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:04.455 01:12:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:04.455 01:12:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:04.455 01:12:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:04.456 01:12:30 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:04.456 mke2fs 1.46.5 (30-Dec-2021) 00:07:04.456 Discarding device blocks: 0/522240 done 00:07:04.456 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:04.456 Filesystem UUID: efeb3ec3-e337-4cdf-99ed-6bdc0ec34ba9 00:07:04.456 Superblock backups stored on blocks: 00:07:04.456 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:04.456 00:07:04.456 Allocating group tables: 0/64 done 00:07:04.456 Writing inode tables: 0/64 done 00:07:04.714 Creating journal (8192 blocks): done 00:07:04.714 Writing superblocks and filesystem accounting information: 0/64 done 00:07:04.714 00:07:04.714 01:12:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:04.714 01:12:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:05.649 01:12:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:05.649 01:12:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:05.649 01:12:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:05.649 01:12:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:05.649 01:12:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:05.649 01:12:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:05.649 01:12:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3233601 00:07:05.649 01:12:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:05.649 01:12:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:05.649 01:12:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:05.649 01:12:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:05.649 00:07:05.649 real 0m1.145s 00:07:05.649 user 0m0.028s 00:07:05.649 sys 0m0.063s 00:07:05.649 01:12:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.649 01:12:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:05.649 ************************************ 00:07:05.649 END TEST filesystem_ext4 00:07:05.649 ************************************ 00:07:05.649 01:12:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:05.649 01:12:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:05.649 01:12:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:05.649 01:12:31 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.649 01:12:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:05.649 ************************************ 00:07:05.649 START TEST filesystem_btrfs 00:07:05.649 ************************************ 00:07:05.649 01:12:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:05.649 01:12:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:05.649 01:12:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:05.649 01:12:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:05.649 01:12:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:05.649 01:12:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:05.649 01:12:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:05.649 01:12:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:05.649 01:12:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:05.649 01:12:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:05.649 01:12:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:05.907 btrfs-progs v6.6.2 00:07:05.907 See https://btrfs.readthedocs.io for more information. 00:07:05.907 00:07:05.907 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:05.907 NOTE: several default settings have changed in version 5.15, please make sure 00:07:05.907 this does not affect your deployments: 00:07:05.907 - DUP for metadata (-m dup) 00:07:05.907 - enabled no-holes (-O no-holes) 00:07:05.907 - enabled free-space-tree (-R free-space-tree) 00:07:05.907 00:07:05.907 Label: (null) 00:07:05.907 UUID: 0537b719-e942-4658-bbaa-5b7870b5df2a 00:07:05.907 Node size: 16384 00:07:05.907 Sector size: 4096 00:07:05.907 Filesystem size: 510.00MiB 00:07:05.907 Block group profiles: 00:07:05.907 Data: single 8.00MiB 00:07:05.907 Metadata: DUP 32.00MiB 00:07:05.907 System: DUP 8.00MiB 00:07:05.907 SSD detected: yes 00:07:05.907 Zoned device: no 00:07:05.907 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:05.907 Runtime features: free-space-tree 00:07:05.907 Checksum: crc32c 00:07:05.907 Number of devices: 1 00:07:05.907 Devices: 00:07:05.907 ID SIZE PATH 00:07:05.907 1 510.00MiB /dev/nvme0n1p1 00:07:05.907 00:07:05.907 01:12:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:05.907 01:12:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:06.842 01:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:06.842 01:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:06.842 01:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:06.842 01:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:06.842 01:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:06.842 01:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:06.842 01:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3233601 00:07:06.842 01:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:06.842 01:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:06.842 01:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:06.842 01:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:06.842 00:07:06.842 real 0m1.307s 00:07:06.842 user 0m0.027s 00:07:06.842 sys 0m0.124s 00:07:06.842 01:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.842 01:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:06.842 ************************************ 00:07:06.842 END TEST filesystem_btrfs 00:07:06.842 ************************************ 00:07:07.101 01:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:07.101 01:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:07.101 01:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:07.101 01:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.101 01:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:07.101 ************************************ 00:07:07.101 START TEST filesystem_xfs 00:07:07.101 ************************************ 00:07:07.101 01:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:07.101 01:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:07.101 01:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:07.101 01:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:07.101 01:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:07.101 01:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:07.101 01:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:07.101 01:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:07:07.101 01:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:07.101 01:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:07.101 01:12:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:07.101 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:07.101 = sectsz=512 attr=2, projid32bit=1 00:07:07.101 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:07.101 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:07.101 data = bsize=4096 blocks=130560, imaxpct=25 00:07:07.101 = sunit=0 swidth=0 blks 00:07:07.101 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:07.101 log =internal log bsize=4096 blocks=16384, version=2 00:07:07.101 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:07.101 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:08.037 Discarding blocks...Done. 
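
Each filesystem_* subtest above runs the same nvmf_filesystem_create body against the partition exported over NVMe/TCP; only the mkfs invocation changes (mkfs.ext4 -F, mkfs.btrfs -f, mkfs.xfs -f). Condensed from the trace, using this run's device name and target pid:

    mkfs.xfs -f /dev/nvme0n1p1                # ext4 and btrfs variants swap in their own mkfs
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa                     # the filesystem must accept a write over the fabric
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 3233601                           # the target process must have survived the I/O
    lsblk -l -o NAME | grep -q -w nvme0n1     # controller namespace and partition still visible
    lsblk -l -o NAME | grep -q -w nvme0n1p1
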
00:07:08.037 01:12:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:08.037 01:12:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:09.936 01:12:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:09.936 01:12:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:09.936 01:12:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:09.936 01:12:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:09.936 01:12:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:09.936 01:12:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:09.936 01:12:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3233601 00:07:09.936 01:12:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:09.936 01:12:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:09.936 01:12:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:09.936 01:12:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:09.936 00:07:09.936 real 0m2.724s 00:07:09.936 user 0m0.022s 00:07:09.936 sys 0m0.072s 00:07:09.936 01:12:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.936 01:12:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:09.936 ************************************ 00:07:09.936 END TEST filesystem_xfs 00:07:09.936 ************************************ 00:07:09.936 01:12:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:09.936 01:12:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:09.936 01:12:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:09.936 01:12:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:10.195 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:10.195 01:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:10.195 01:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:10.195 01:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:10.195 01:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:10.195 01:12:36 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:10.195 01:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:10.195 01:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:10.195 01:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:10.195 01:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.195 01:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:10.195 01:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.195 01:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:10.195 01:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3233601 00:07:10.195 01:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 3233601 ']' 00:07:10.195 01:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 3233601 00:07:10.195 01:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:10.195 01:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:10.195 01:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3233601 00:07:10.195 01:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:10.195 01:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:10.195 01:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3233601' 00:07:10.195 killing process with pid 3233601 00:07:10.195 01:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 3233601 00:07:10.195 01:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 3233601 00:07:10.453 01:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:10.453 00:07:10.453 real 0m11.996s 00:07:10.453 user 0m47.118s 00:07:10.453 sys 0m1.170s 00:07:10.453 01:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.453 01:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:10.453 ************************************ 00:07:10.453 END TEST nvmf_filesystem_no_in_capsule 00:07:10.453 ************************************ 00:07:10.710 01:12:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:10.710 01:12:36 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:10.710 01:12:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:07:10.710 01:12:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.710 01:12:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:10.710 ************************************ 00:07:10.710 START TEST nvmf_filesystem_in_capsule 00:07:10.710 ************************************ 00:07:10.710 01:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:07:10.710 01:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:10.710 01:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:10.710 01:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:10.710 01:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:10.710 01:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:10.710 01:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3235896 00:07:10.710 01:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3235896 00:07:10.710 01:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:10.710 01:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 3235896 ']' 00:07:10.710 01:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.710 01:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:10.710 01:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.710 01:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:10.710 01:12:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:10.710 [2024-07-16 01:12:36.566280] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:07:10.710 [2024-07-16 01:12:36.566320] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:10.710 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.710 [2024-07-16 01:12:36.624897] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:10.968 [2024-07-16 01:12:36.700231] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:10.968 [2024-07-16 01:12:36.700276] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:10.968 [2024-07-16 01:12:36.700282] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:10.968 [2024-07-16 01:12:36.700288] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:10.968 [2024-07-16 01:12:36.700293] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:10.968 [2024-07-16 01:12:36.700345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.968 [2024-07-16 01:12:36.700360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.968 [2024-07-16 01:12:36.700433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:10.968 [2024-07-16 01:12:36.700434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.535 01:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:11.535 01:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:11.535 01:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:11.535 01:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:11.535 01:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:11.535 01:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:11.535 01:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:11.535 01:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:11.535 01:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.535 01:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:11.535 [2024-07-16 01:12:37.425254] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:11.535 01:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.535 01:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:11.535 01:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.535 01:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:11.793 Malloc1 00:07:11.793 01:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.793 01:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:11.793 01:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.793 01:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:11.793 01:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.793 01:12:37 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:11.793 01:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.793 01:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:11.793 01:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.793 01:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:11.793 01:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.793 01:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:11.793 [2024-07-16 01:12:37.563280] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:11.793 01:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.793 01:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:11.793 01:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:11.793 01:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:11.793 01:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:11.793 01:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:11.793 01:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:11.794 01:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.794 01:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:11.794 01:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.794 01:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:11.794 { 00:07:11.794 "name": "Malloc1", 00:07:11.794 "aliases": [ 00:07:11.794 "21aa9862-18ed-4035-8e65-8251673ab087" 00:07:11.794 ], 00:07:11.794 "product_name": "Malloc disk", 00:07:11.794 "block_size": 512, 00:07:11.794 "num_blocks": 1048576, 00:07:11.794 "uuid": "21aa9862-18ed-4035-8e65-8251673ab087", 00:07:11.794 "assigned_rate_limits": { 00:07:11.794 "rw_ios_per_sec": 0, 00:07:11.794 "rw_mbytes_per_sec": 0, 00:07:11.794 "r_mbytes_per_sec": 0, 00:07:11.794 "w_mbytes_per_sec": 0 00:07:11.794 }, 00:07:11.794 "claimed": true, 00:07:11.794 "claim_type": "exclusive_write", 00:07:11.794 "zoned": false, 00:07:11.794 "supported_io_types": { 00:07:11.794 "read": true, 00:07:11.794 "write": true, 00:07:11.794 "unmap": true, 00:07:11.794 "flush": true, 00:07:11.794 "reset": true, 00:07:11.794 "nvme_admin": false, 00:07:11.794 "nvme_io": false, 00:07:11.794 "nvme_io_md": false, 00:07:11.794 "write_zeroes": true, 00:07:11.794 "zcopy": true, 00:07:11.794 "get_zone_info": false, 00:07:11.794 "zone_management": false, 00:07:11.794 
"zone_append": false, 00:07:11.794 "compare": false, 00:07:11.794 "compare_and_write": false, 00:07:11.794 "abort": true, 00:07:11.794 "seek_hole": false, 00:07:11.794 "seek_data": false, 00:07:11.794 "copy": true, 00:07:11.794 "nvme_iov_md": false 00:07:11.794 }, 00:07:11.794 "memory_domains": [ 00:07:11.794 { 00:07:11.794 "dma_device_id": "system", 00:07:11.794 "dma_device_type": 1 00:07:11.794 }, 00:07:11.794 { 00:07:11.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:11.794 "dma_device_type": 2 00:07:11.794 } 00:07:11.794 ], 00:07:11.794 "driver_specific": {} 00:07:11.794 } 00:07:11.794 ]' 00:07:11.794 01:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:11.794 01:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:11.794 01:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:11.794 01:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:11.794 01:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:11.794 01:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:11.794 01:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:11.794 01:12:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:13.167 01:12:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:13.167 01:12:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:13.167 01:12:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:13.167 01:12:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:13.167 01:12:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:15.067 01:12:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:15.067 01:12:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:15.067 01:12:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:15.067 01:12:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:15.067 01:12:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:15.067 01:12:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:15.067 01:12:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:15.067 01:12:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:07:15.067 01:12:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:15.067 01:12:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:15.067 01:12:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:15.067 01:12:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:15.067 01:12:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:15.067 01:12:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:15.067 01:12:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:15.067 01:12:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:15.067 01:12:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:15.067 01:12:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:15.325 01:12:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:16.257 01:12:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:16.257 01:12:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:16.257 01:12:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:16.257 01:12:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.257 01:12:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:16.257 ************************************ 00:07:16.257 START TEST filesystem_in_capsule_ext4 00:07:16.257 ************************************ 00:07:16.257 01:12:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:16.257 01:12:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:16.257 01:12:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:16.257 01:12:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:16.257 01:12:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:16.257 01:12:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:16.257 01:12:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:16.257 01:12:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:16.257 01:12:42 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:16.257 01:12:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:16.257 01:12:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:16.257 mke2fs 1.46.5 (30-Dec-2021) 00:07:16.515 Discarding device blocks: 0/522240 done 00:07:16.515 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:16.515 Filesystem UUID: ac8d30e1-047a-4f26-a49a-2b83733a5881 00:07:16.515 Superblock backups stored on blocks: 00:07:16.515 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:16.515 00:07:16.515 Allocating group tables: 0/64 done 00:07:16.515 Writing inode tables: 0/64 done 00:07:17.030 Creating journal (8192 blocks): done 00:07:17.852 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:07:17.852 00:07:17.852 01:12:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:17.852 01:12:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:18.111 01:12:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:18.111 01:12:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:18.111 01:12:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:18.111 01:12:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:18.111 01:12:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:18.111 01:12:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:18.111 01:12:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3235896 00:07:18.111 01:12:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:18.111 01:12:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:18.111 01:12:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:18.111 01:12:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:18.111 00:07:18.111 real 0m1.755s 00:07:18.111 user 0m0.025s 00:07:18.111 sys 0m0.064s 00:07:18.111 01:12:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.111 01:12:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:18.111 ************************************ 00:07:18.111 END TEST filesystem_in_capsule_ext4 00:07:18.111 ************************************ 
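The make_filesystem helper traced through the ext4 run above (autotest_common.sh@924-943) picks the force flag per filesystem: mke2fs spells it -F, while mkfs.btrfs and mkfs.xfs use lowercase -f, as the btrfs and xfs runs that follow confirm (@932 force=-f). A hedged sketch of that branch:

make_filesystem() {
    local fstype=$1 dev_name=$2
    local i=0 force
    if [ "$fstype" = ext4 ]; then
        force=-F    # mke2fs' spelling of "force"
    else
        force=-f    # the btrfs/xfs spelling of the same flag
    fi
    mkfs.$fstype $force "$dev_name"
}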
00:07:18.111 01:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:18.111 01:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:18.111 01:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:18.111 01:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.111 01:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:18.111 ************************************ 00:07:18.111 START TEST filesystem_in_capsule_btrfs 00:07:18.111 ************************************ 00:07:18.111 01:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:18.111 01:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:18.111 01:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:18.111 01:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:18.111 01:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:18.111 01:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:18.111 01:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:18.111 01:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:18.111 01:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:18.111 01:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:18.111 01:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:18.369 btrfs-progs v6.6.2 00:07:18.369 See https://btrfs.readthedocs.io for more information. 00:07:18.369 00:07:18.369 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:18.369 NOTE: several default settings have changed in version 5.15, please make sure 00:07:18.369 this does not affect your deployments: 00:07:18.369 - DUP for metadata (-m dup) 00:07:18.369 - enabled no-holes (-O no-holes) 00:07:18.369 - enabled free-space-tree (-R free-space-tree) 00:07:18.369 00:07:18.369 Label: (null) 00:07:18.369 UUID: e5b621c2-a3dd-4e5e-a79a-9e93d1d06a2e 00:07:18.369 Node size: 16384 00:07:18.369 Sector size: 4096 00:07:18.369 Filesystem size: 510.00MiB 00:07:18.369 Block group profiles: 00:07:18.369 Data: single 8.00MiB 00:07:18.369 Metadata: DUP 32.00MiB 00:07:18.369 System: DUP 8.00MiB 00:07:18.369 SSD detected: yes 00:07:18.369 Zoned device: no 00:07:18.369 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:18.369 Runtime features: free-space-tree 00:07:18.369 Checksum: crc32c 00:07:18.369 Number of devices: 1 00:07:18.369 Devices: 00:07:18.369 ID SIZE PATH 00:07:18.369 1 510.00MiB /dev/nvme0n1p1 00:07:18.369 00:07:18.627 01:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:18.627 01:12:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:19.222 01:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:19.222 01:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:19.222 01:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:19.223 01:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:19.223 01:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:19.223 01:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:19.223 01:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3235896 00:07:19.223 01:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:19.223 01:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:19.223 01:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:19.223 01:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:19.223 00:07:19.223 real 0m1.131s 00:07:19.223 user 0m0.035s 00:07:19.223 sys 0m0.116s 00:07:19.223 01:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.223 01:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:19.223 ************************************ 00:07:19.223 END TEST filesystem_in_capsule_btrfs 00:07:19.223 ************************************ 00:07:19.223 01:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:07:19.223 01:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:19.223 01:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:19.223 01:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.223 01:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:19.481 ************************************ 00:07:19.481 START TEST filesystem_in_capsule_xfs 00:07:19.481 ************************************ 00:07:19.481 01:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:19.481 01:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:19.481 01:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:19.481 01:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:19.481 01:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:19.481 01:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:19.481 01:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:19.481 01:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:07:19.481 01:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:19.481 01:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:19.481 01:12:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:19.481 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:19.481 = sectsz=512 attr=2, projid32bit=1 00:07:19.481 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:19.481 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:19.481 data = bsize=4096 blocks=130560, imaxpct=25 00:07:19.481 = sunit=0 swidth=0 blks 00:07:19.481 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:19.481 log =internal log bsize=4096 blocks=16384, version=2 00:07:19.481 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:19.481 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:20.415 Discarding blocks...Done. 
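After each mkfs the test runs the same smoke sequence (filesystem.sh@23-30), repeated below for xfs: mount the fresh filesystem, create and sync a file, remove it, sync again, unmount, and then @37's 'kill -0 3235896' asserts the nvmf target survived the I/O. Condensed from the trace:

mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 "$nvmfpid"    # the target (pid 3235896 in this run) must still be alive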
00:07:20.415 01:12:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:20.415 01:12:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:22.945 01:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:22.945 01:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:22.945 01:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:22.945 01:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:22.945 01:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:22.945 01:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:22.945 01:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3235896 00:07:22.945 01:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:22.945 01:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:22.945 01:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:22.945 01:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:22.945 00:07:22.945 real 0m3.156s 00:07:22.945 user 0m0.024s 00:07:22.945 sys 0m0.072s 00:07:22.945 01:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.945 01:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:22.945 ************************************ 00:07:22.945 END TEST filesystem_in_capsule_xfs 00:07:22.945 ************************************ 00:07:22.945 01:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:22.945 01:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:22.945 01:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:22.945 01:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:22.945 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:22.945 01:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:22.945 01:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:22.945 01:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:22.945 01:12:48 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:22.945 01:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:22.945 01:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:22.945 01:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:22.945 01:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:22.945 01:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.945 01:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:22.945 01:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.945 01:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:22.945 01:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3235896 00:07:22.945 01:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 3235896 ']' 00:07:22.945 01:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 3235896 00:07:22.945 01:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:22.945 01:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:22.945 01:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3235896 00:07:22.945 01:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:22.945 01:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:22.945 01:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3235896' 00:07:22.945 killing process with pid 3235896 00:07:22.945 01:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 3235896 00:07:22.945 01:12:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 3235896 00:07:23.204 01:12:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:23.204 00:07:23.204 real 0m12.499s 00:07:23.204 user 0m49.117s 00:07:23.204 sys 0m1.206s 00:07:23.204 01:12:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.204 01:12:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:23.204 ************************************ 00:07:23.204 END TEST nvmf_filesystem_in_capsule 00:07:23.204 ************************************ 00:07:23.204 01:12:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:23.204 01:12:49 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:23.204 01:12:49 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:07:23.204 01:12:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:23.204 01:12:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:23.204 01:12:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:23.204 01:12:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:23.204 01:12:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:23.204 rmmod nvme_tcp 00:07:23.204 rmmod nvme_fabrics 00:07:23.204 rmmod nvme_keyring 00:07:23.204 01:12:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:23.204 01:12:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:23.204 01:12:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:23.204 01:12:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:23.204 01:12:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:23.204 01:12:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:23.204 01:12:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:23.204 01:12:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:23.204 01:12:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:23.204 01:12:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:23.204 01:12:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:23.204 01:12:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:25.738 01:12:51 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:25.738 00:07:25.738 real 0m32.231s 00:07:25.738 user 1m37.891s 00:07:25.738 sys 0m6.438s 00:07:25.738 01:12:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.738 01:12:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:25.738 ************************************ 00:07:25.738 END TEST nvmf_filesystem 00:07:25.738 ************************************ 00:07:25.738 01:12:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:25.738 01:12:51 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:25.738 01:12:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:25.738 01:12:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.738 01:12:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:25.738 ************************************ 00:07:25.738 START TEST nvmf_target_discovery 00:07:25.738 ************************************ 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:25.738 * Looking for test storage... 
00:07:25.738 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:25.738 01:12:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:31.007 01:12:56 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:31.007 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:31.007 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:31.007 Found net devices under 0000:86:00.0: cvl_0_0 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:31.007 Found net devices under 0000:86:00.1: cvl_0_1 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:31.007 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:31.007 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:07:31.007 00:07:31.007 --- 10.0.0.2 ping statistics --- 00:07:31.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.007 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:31.007 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:31.007 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:07:31.007 00:07:31.007 --- 10.0.0.1 ping statistics --- 00:07:31.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.007 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:31.007 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3241683 00:07:31.008 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 3241683 00:07:31.008 01:12:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:31.008 01:12:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 3241683 ']' 00:07:31.008 01:12:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.008 01:12:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:31.008 01:12:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:07:31.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.008 01:12:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:31.008 01:12:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:31.008 [2024-07-16 01:12:56.780483] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:07:31.008 [2024-07-16 01:12:56.780533] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:31.008 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.008 [2024-07-16 01:12:56.838268] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:31.008 [2024-07-16 01:12:56.917334] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:31.008 [2024-07-16 01:12:56.917373] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:31.008 [2024-07-16 01:12:56.917380] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:31.008 [2024-07-16 01:12:56.917385] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:31.008 [2024-07-16 01:12:56.917391] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:31.008 [2024-07-16 01:12:56.917449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.008 [2024-07-16 01:12:56.917537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.008 [2024-07-16 01:12:56.917629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:31.008 [2024-07-16 01:12:56.917630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.940 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:31.940 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:07:31.940 01:12:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:31.940 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:31.940 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:31.940 01:12:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:31.940 01:12:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:31.940 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.940 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:31.940 [2024-07-16 01:12:57.621270] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:31.940 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.940 01:12:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:31.940 01:12:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:31.940 01:12:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
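Before the null-bdev loop whose output follows, nvmfappstart (nvmf/common.sh@479-482) launched the target inside the cvl_0_0_ns_spdk namespace set up earlier and waited for its RPC socket, after which discovery.sh@23 created the TCP transport. Roughly, with the binary path and flags taken verbatim from the trace:

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
waitforlisten "$nvmfpid"    # poll /var/tmp/spdk.sock until RPCs are accepted
rpc_cmd nvmf_create_transport -t tcp -o -u 8192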
00:07:31.940 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.940 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:31.940 Null1 00:07:31.940 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.940 01:12:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:31.940 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.940 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:31.940 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.940 01:12:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:31.940 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.940 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:31.940 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.940 01:12:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:31.940 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.940 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:31.940 [2024-07-16 01:12:57.666640] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:31.940 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.940 01:12:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:31.940 01:12:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:31.940 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.940 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:31.940 Null2 00:07:31.940 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:31.941 01:12:57 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:31.941 Null3 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:31.941 Null4 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.941 01:12:57 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:07:31.941 00:07:31.941 Discovery Log Number of Records 6, Generation counter 6 00:07:31.941 =====Discovery Log Entry 0====== 00:07:31.941 trtype: tcp 00:07:31.941 adrfam: ipv4 00:07:31.941 subtype: current discovery subsystem 00:07:31.941 treq: not required 00:07:31.941 portid: 0 00:07:31.941 trsvcid: 4420 00:07:31.941 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:31.941 traddr: 10.0.0.2 00:07:31.941 eflags: explicit discovery connections, duplicate discovery information 00:07:31.941 sectype: none 00:07:31.941 =====Discovery Log Entry 1====== 00:07:31.941 trtype: tcp 00:07:31.941 adrfam: ipv4 00:07:31.941 subtype: nvme subsystem 00:07:31.941 treq: not required 00:07:31.941 portid: 0 00:07:31.941 trsvcid: 4420 00:07:31.941 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:31.941 traddr: 10.0.0.2 00:07:31.941 eflags: none 00:07:31.941 sectype: none 00:07:31.941 =====Discovery Log Entry 2====== 00:07:31.941 trtype: tcp 00:07:31.941 adrfam: ipv4 00:07:31.941 subtype: nvme subsystem 00:07:31.941 treq: not required 00:07:31.941 portid: 0 00:07:31.941 trsvcid: 4420 00:07:31.941 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:31.941 traddr: 10.0.0.2 00:07:31.941 eflags: none 00:07:31.941 sectype: none 00:07:31.941 =====Discovery Log Entry 3====== 00:07:31.941 trtype: tcp 00:07:31.941 adrfam: ipv4 00:07:31.941 subtype: nvme subsystem 00:07:31.941 treq: not required 00:07:31.941 portid: 0 00:07:31.941 trsvcid: 4420 00:07:31.941 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:31.941 traddr: 10.0.0.2 00:07:31.941 eflags: none 00:07:31.941 sectype: none 00:07:31.941 =====Discovery Log Entry 4====== 00:07:31.941 trtype: tcp 00:07:31.941 adrfam: ipv4 00:07:31.941 subtype: nvme subsystem 00:07:31.941 treq: not required 
00:07:31.941 portid: 0 00:07:31.941 trsvcid: 4420 00:07:31.941 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:31.941 traddr: 10.0.0.2 00:07:31.941 eflags: none 00:07:31.941 sectype: none 00:07:31.941 =====Discovery Log Entry 5====== 00:07:31.941 trtype: tcp 00:07:31.941 adrfam: ipv4 00:07:31.941 subtype: discovery subsystem referral 00:07:31.941 treq: not required 00:07:31.941 portid: 0 00:07:31.941 trsvcid: 4430 00:07:31.941 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:31.941 traddr: 10.0.0.2 00:07:31.941 eflags: none 00:07:31.941 sectype: none 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:31.941 Perform nvmf subsystem discovery via RPC 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.941 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:31.941 [ 00:07:31.941 { 00:07:31.941 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:31.941 "subtype": "Discovery", 00:07:31.941 "listen_addresses": [ 00:07:31.941 { 00:07:31.941 "trtype": "TCP", 00:07:31.941 "adrfam": "IPv4", 00:07:31.941 "traddr": "10.0.0.2", 00:07:31.941 "trsvcid": "4420" 00:07:31.941 } 00:07:31.941 ], 00:07:31.941 "allow_any_host": true, 00:07:31.941 "hosts": [] 00:07:31.941 }, 00:07:31.941 { 00:07:31.941 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:31.941 "subtype": "NVMe", 00:07:31.941 "listen_addresses": [ 00:07:31.941 { 00:07:31.941 "trtype": "TCP", 00:07:31.941 "adrfam": "IPv4", 00:07:31.941 "traddr": "10.0.0.2", 00:07:31.941 "trsvcid": "4420" 00:07:31.941 } 00:07:31.941 ], 00:07:31.941 "allow_any_host": true, 00:07:31.941 "hosts": [], 00:07:31.941 "serial_number": "SPDK00000000000001", 00:07:31.941 "model_number": "SPDK bdev Controller", 00:07:31.941 "max_namespaces": 32, 00:07:31.941 "min_cntlid": 1, 00:07:31.941 "max_cntlid": 65519, 00:07:31.941 "namespaces": [ 00:07:31.941 { 00:07:31.941 "nsid": 1, 00:07:31.941 "bdev_name": "Null1", 00:07:31.941 "name": "Null1", 00:07:31.941 "nguid": "7EA9FB3FD6C949478E762007979437A0", 00:07:31.941 "uuid": "7ea9fb3f-d6c9-4947-8e76-2007979437a0" 00:07:31.941 } 00:07:31.941 ] 00:07:31.941 }, 00:07:31.941 { 00:07:31.941 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:31.941 "subtype": "NVMe", 00:07:31.941 "listen_addresses": [ 00:07:31.941 { 00:07:31.941 "trtype": "TCP", 00:07:31.941 "adrfam": "IPv4", 00:07:31.941 "traddr": "10.0.0.2", 00:07:31.941 "trsvcid": "4420" 00:07:31.941 } 00:07:31.941 ], 00:07:31.941 "allow_any_host": true, 00:07:31.941 "hosts": [], 00:07:31.941 "serial_number": "SPDK00000000000002", 00:07:31.941 "model_number": "SPDK bdev Controller", 00:07:31.941 "max_namespaces": 32, 00:07:31.941 "min_cntlid": 1, 00:07:31.941 "max_cntlid": 65519, 00:07:31.941 "namespaces": [ 00:07:31.941 { 00:07:31.941 "nsid": 1, 00:07:31.941 "bdev_name": "Null2", 00:07:31.941 "name": "Null2", 00:07:31.941 "nguid": "B8AE433FEB214731B4D5BD91A909A060", 00:07:31.941 "uuid": "b8ae433f-eb21-4731-b4d5-bd91a909a060" 00:07:31.941 } 00:07:31.941 ] 00:07:31.941 }, 00:07:31.941 { 00:07:31.941 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:31.941 "subtype": "NVMe", 00:07:31.941 "listen_addresses": [ 00:07:31.941 { 00:07:31.942 "trtype": "TCP", 00:07:31.942 "adrfam": "IPv4", 00:07:31.942 "traddr": "10.0.0.2", 00:07:31.942 "trsvcid": "4420" 00:07:31.942 } 00:07:31.942 ], 00:07:31.942 "allow_any_host": true, 
00:07:31.942 "hosts": [], 00:07:31.942 "serial_number": "SPDK00000000000003", 00:07:31.942 "model_number": "SPDK bdev Controller", 00:07:31.942 "max_namespaces": 32, 00:07:31.942 "min_cntlid": 1, 00:07:31.942 "max_cntlid": 65519, 00:07:31.942 "namespaces": [ 00:07:31.942 { 00:07:31.942 "nsid": 1, 00:07:31.942 "bdev_name": "Null3", 00:07:31.942 "name": "Null3", 00:07:31.942 "nguid": "4D739D5CF27047AB95ABA19C0375FEF6", 00:07:31.942 "uuid": "4d739d5c-f270-47ab-95ab-a19c0375fef6" 00:07:31.942 } 00:07:31.942 ] 00:07:31.942 }, 00:07:31.942 { 00:07:31.942 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:31.942 "subtype": "NVMe", 00:07:31.942 "listen_addresses": [ 00:07:31.942 { 00:07:31.942 "trtype": "TCP", 00:07:31.942 "adrfam": "IPv4", 00:07:31.942 "traddr": "10.0.0.2", 00:07:31.942 "trsvcid": "4420" 00:07:31.942 } 00:07:31.942 ], 00:07:31.942 "allow_any_host": true, 00:07:31.942 "hosts": [], 00:07:31.942 "serial_number": "SPDK00000000000004", 00:07:31.942 "model_number": "SPDK bdev Controller", 00:07:31.942 "max_namespaces": 32, 00:07:31.942 "min_cntlid": 1, 00:07:31.942 "max_cntlid": 65519, 00:07:31.942 "namespaces": [ 00:07:31.942 { 00:07:31.942 "nsid": 1, 00:07:31.942 "bdev_name": "Null4", 00:07:31.942 "name": "Null4", 00:07:31.942 "nguid": "771696590CFB40A2B1A2C6B3118B3088", 00:07:31.942 "uuid": "77169659-0cfb-40a2-b1a2-c6b3118b3088" 00:07:31.942 } 00:07:31.942 ] 00:07:31.942 } 00:07:31.942 ] 00:07:31.942 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.942 01:12:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:07:31.942 01:12:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:31.942 01:12:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:31.942 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.942 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:31.942 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.942 01:12:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:31.942 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.942 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:31.942 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.942 01:12:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:31.942 01:12:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:31.942 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.942 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:31.942 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.942 01:12:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:31.942 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.942 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:32.199 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:07:32.199 01:12:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:32.199 01:12:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:32.199 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.199 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:32.199 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.199 01:12:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:32.200 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.200 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:32.200 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.200 01:12:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:32.200 01:12:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:32.200 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.200 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:32.200 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.200 01:12:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:32.200 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.200 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:32.200 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.200 01:12:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:32.200 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.200 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:32.200 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.200 01:12:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:32.200 01:12:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:32.200 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.200 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:32.200 01:12:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.200 01:12:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:32.200 01:12:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:32.200 01:12:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:32.200 01:12:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:32.200 01:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:32.200 01:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:32.200 01:12:58 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:32.200 01:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:32.200 01:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:32.200 01:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:32.200 rmmod nvme_tcp 00:07:32.200 rmmod nvme_fabrics 00:07:32.200 rmmod nvme_keyring 00:07:32.200 01:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:32.200 01:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:32.200 01:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:32.200 01:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3241683 ']' 00:07:32.200 01:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3241683 00:07:32.200 01:12:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 3241683 ']' 00:07:32.200 01:12:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 3241683 00:07:32.200 01:12:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:07:32.200 01:12:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:32.200 01:12:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3241683 00:07:32.200 01:12:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:32.200 01:12:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:32.200 01:12:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3241683' 00:07:32.200 killing process with pid 3241683 00:07:32.200 01:12:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 3241683 00:07:32.200 01:12:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 3241683 00:07:32.478 01:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:32.478 01:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:32.478 01:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:32.478 01:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:32.478 01:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:32.478 01:12:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:32.478 01:12:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:32.478 01:12:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.422 01:13:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:34.422 00:07:34.422 real 0m9.136s 00:07:34.422 user 0m7.203s 00:07:34.422 sys 0m4.372s 00:07:34.422 01:13:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.422 01:13:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:34.422 ************************************ 00:07:34.422 END TEST nvmf_target_discovery 00:07:34.422 ************************************ 00:07:34.422 01:13:00 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:07:34.422 01:13:00 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:34.422 01:13:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:34.422 01:13:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.422 01:13:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:34.679 ************************************ 00:07:34.679 START TEST nvmf_referrals 00:07:34.679 ************************************ 00:07:34.679 01:13:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:34.679 * Looking for test storage... 00:07:34.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:34.679 01:13:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:34.679 01:13:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:34.679 01:13:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:34.679 01:13:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
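The three referral addresses just defined (127.0.0.2, 127.0.0.3, 127.0.0.4) are the fixture for the checks traced below. Reduced to its RPC core, and again assuming rpc_cmd wraps scripts/rpc.py, the flow of this test is roughly:

# discovery listener on 8009, three referrals, count verified via RPC and from the host
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done
rpc.py nvmf_discovery_get_referrals | jq length   # expect 3
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
# removing a referral drops its discovery log entry; -n attaches a referral to a specific subsystem NQN
rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1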
00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:07:34.680 01:13:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:39.947 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:39.947 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:07:39.947 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:39.947 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:39.947 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:39.947 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:39.947 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:39.947 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:07:39.947 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:39.947 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:07:39.947 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:07:39.947 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:07:39.947 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:07:39.947 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:07:39.947 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:07:39.947 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:39.947 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:39.947 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:39.948 01:13:05 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:39.948 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:39.948 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:39.948 01:13:05 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:39.948 Found net devices under 0000:86:00.0: cvl_0_0 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:39.948 Found net devices under 0000:86:00.1: cvl_0_1 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:39.948 01:13:05 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:39.948 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:39.948 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:07:39.948 00:07:39.948 --- 10.0.0.2 ping statistics --- 00:07:39.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.948 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:39.948 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:39.948 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:07:39.948 00:07:39.948 --- 10.0.0.1 ping statistics --- 00:07:39.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.948 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:39.948 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:40.207 01:13:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:40.207 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:40.207 01:13:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:40.207 01:13:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:40.207 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3245266 00:07:40.207 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3245266 00:07:40.207 01:13:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:40.207 01:13:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 3245266 ']' 00:07:40.207 01:13:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.207 01:13:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:40.207 01:13:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:40.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.207 01:13:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:40.207 01:13:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:40.207 [2024-07-16 01:13:05.988692] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:07:40.208 [2024-07-16 01:13:05.988736] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:40.208 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.208 [2024-07-16 01:13:06.049902] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:40.208 [2024-07-16 01:13:06.129467] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:40.208 [2024-07-16 01:13:06.129502] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:40.208 [2024-07-16 01:13:06.129509] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:40.208 [2024-07-16 01:13:06.129515] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:40.208 [2024-07-16 01:13:06.129523] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:40.208 [2024-07-16 01:13:06.129594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.208 [2024-07-16 01:13:06.129711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:40.208 [2024-07-16 01:13:06.129798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:40.208 [2024-07-16 01:13:06.129799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.142 01:13:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:41.142 01:13:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:07:41.142 01:13:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:41.142 01:13:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:41.142 01:13:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.142 01:13:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:41.143 01:13:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:41.143 01:13:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.143 01:13:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.143 [2024-07-16 01:13:06.828088] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:41.143 01:13:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.143 01:13:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:41.143 01:13:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.143 01:13:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.143 [2024-07-16 01:13:06.841362] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:07:41.143 01:13:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.143 01:13:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:41.143 01:13:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.143 01:13:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.143 01:13:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.143 01:13:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:41.143 01:13:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.143 01:13:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.143 01:13:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.143 01:13:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:41.143 01:13:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.143 01:13:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.143 01:13:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.143 01:13:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:41.143 01:13:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:07:41.143 01:13:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.143 01:13:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.143 01:13:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.143 01:13:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:41.143 01:13:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:41.143 01:13:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:41.143 01:13:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:41.143 01:13:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:41.143 01:13:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.143 01:13:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:41.143 01:13:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.143 01:13:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.143 01:13:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:41.143 01:13:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:41.143 01:13:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:41.143 01:13:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:41.143 01:13:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:41.143 01:13:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
--hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:41.143 01:13:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:41.143 01:13:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:41.143 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:41.143 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:41.143 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:41.143 01:13:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.143 01:13:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:41.402 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:41.661 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:41.661 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:41.661 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:41.661 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:41.661 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:41.661 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:41.661 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:41.934 01:13:07 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:41.934 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:41.934 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:41.934 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:41.935 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:41.935 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:41.935 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:41.935 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:41.935 01:13:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.935 01:13:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.935 01:13:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.935 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:41.935 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:41.935 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:41.935 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:41.935 01:13:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.935 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:41.935 01:13:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:41.935 01:13:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.193 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:42.193 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:42.193 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:42.193 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:42.193 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:42.193 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:42.193 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:42.193 01:13:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:42.193 01:13:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:42.193 01:13:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:42.193 01:13:08 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:42.193 01:13:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:42.193 01:13:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:42.193 01:13:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:42.193 01:13:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:42.452 01:13:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:42.453 01:13:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:42.453 01:13:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:42.453 01:13:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:42.453 01:13:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:42.453 01:13:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:42.453 01:13:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:42.453 01:13:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:42.453 01:13:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.453 01:13:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:42.453 01:13:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.453 01:13:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:42.453 01:13:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:42.453 01:13:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.453 01:13:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:42.453 01:13:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.453 01:13:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:42.453 01:13:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:42.453 01:13:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:42.453 01:13:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:42.453 01:13:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:42.453 01:13:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:42.453 01:13:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:42.713 
01:13:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:42.713 01:13:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:42.713 01:13:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:42.713 01:13:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:42.713 01:13:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:42.713 01:13:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:42.713 01:13:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:42.713 01:13:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:42.713 01:13:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:42.713 01:13:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:42.713 rmmod nvme_tcp 00:07:42.713 rmmod nvme_fabrics 00:07:42.713 rmmod nvme_keyring 00:07:42.713 01:13:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:42.713 01:13:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:42.713 01:13:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:42.713 01:13:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3245266 ']' 00:07:42.713 01:13:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3245266 00:07:42.713 01:13:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 3245266 ']' 00:07:42.713 01:13:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 3245266 00:07:42.713 01:13:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:07:42.713 01:13:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:42.713 01:13:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3245266 00:07:42.713 01:13:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:42.713 01:13:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:42.713 01:13:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3245266' 00:07:42.713 killing process with pid 3245266 00:07:42.713 01:13:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 3245266 00:07:42.713 01:13:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 3245266 00:07:42.971 01:13:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:42.971 01:13:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:42.971 01:13:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:42.971 01:13:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:42.971 01:13:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:42.971 01:13:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.971 01:13:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:42.971 01:13:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.505 01:13:10 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:45.505 00:07:45.505 real 0m10.485s 00:07:45.505 user 0m13.266s 00:07:45.506 sys 0m4.662s 00:07:45.506 01:13:10 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.506 01:13:10 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:45.506 ************************************ 00:07:45.506 END TEST nvmf_referrals 00:07:45.506 ************************************ 00:07:45.506 01:13:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:45.506 01:13:10 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:45.506 01:13:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:45.506 01:13:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.506 01:13:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:45.506 ************************************ 00:07:45.506 START TEST nvmf_connect_disconnect 00:07:45.506 ************************************ 00:07:45.506 01:13:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:45.506 * Looking for test storage... 00:07:45.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:45.506 01:13:11 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:07:45.506 01:13:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:50.770 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:50.770 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:50.770 01:13:16 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:50.770 Found net devices under 0000:86:00.0: cvl_0_0 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.770 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:50.771 Found net devices under 0000:86:00.1: cvl_0_1 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- 
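The device scan traced above is plain sysfs walking: each E810 function (vendor:device 8086:159b) exposes the kernel interface bound to it under /sys/bus/pci/devices/<bdf>/net/, which is how the script maps PCI functions to the cvl_0_0/cvl_0_1 names. A small sketch of the same lookup, assuming lspci is installed; the vendor:device pair is taken from the log:

    # -D: print the PCI domain, -n: numeric IDs, -d: filter by vendor:device.
    for bdf in $(lspci -Dnd 8086:159b | awk '{print $1}'); do
        # Each entry under .../net/ is a netdev bound to this function.
        echo "$bdf -> $(ls /sys/bus/pci/devices/$bdf/net/)"
    done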
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:50.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:50.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:07:50.771 00:07:50.771 --- 10.0.0.2 ping statistics --- 00:07:50.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.771 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:50.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:50.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:07:50.771 00:07:50.771 --- 10.0.0.1 ping statistics --- 00:07:50.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.771 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3249337 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3249337 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 3249337 ']' 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:50.771 01:13:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:50.771 [2024-07-16 01:13:16.507329] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
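What nvmf_tcp_init builds in the trace above is a two-port loop through a network namespace: one NIC port is moved into the namespace and becomes the target side (10.0.0.2), the other stays in the root namespace as the initiator (10.0.0.1), iptables opens the NVMe/TCP port, and a ping in each direction proves the path. A condensed sketch of that wiring, assuming root privileges and two generic port names tgt0/ini0 standing in for the log's cvl_0_0/cvl_0_1:

    ip netns add nvmf_ns
    ip link set tgt0 netns nvmf_ns                 # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev ini0               # initiator side, root namespace
    ip netns exec nvmf_ns ip addr add 10.0.0.2/24 dev tgt0
    ip link set ini0 up
    ip netns exec nvmf_ns ip link set tgt0 up
    ip netns exec nvmf_ns ip link set lo up
    iptables -I INPUT 1 -i ini0 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                             # root ns -> namespace
    ip netns exec nvmf_ns ping -c 1 10.0.0.1       # and back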
00:07:50.771 [2024-07-16 01:13:16.507376] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:50.771 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.771 [2024-07-16 01:13:16.565548] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:50.771 [2024-07-16 01:13:16.645238] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:50.771 [2024-07-16 01:13:16.645275] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:50.771 [2024-07-16 01:13:16.645282] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:50.771 [2024-07-16 01:13:16.645288] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:50.771 [2024-07-16 01:13:16.645293] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:50.771 [2024-07-16 01:13:16.645345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.771 [2024-07-16 01:13:16.645415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.771 [2024-07-16 01:13:16.645503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:50.771 [2024-07-16 01:13:16.645504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.703 01:13:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:51.703 01:13:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:07:51.703 01:13:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:51.703 01:13:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:51.703 01:13:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:51.703 01:13:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:51.703 01:13:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:51.703 01:13:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.703 01:13:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:51.703 [2024-07-16 01:13:17.381265] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:51.703 01:13:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.703 01:13:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:51.703 01:13:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.703 01:13:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:51.703 01:13:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.703 01:13:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:51.703 01:13:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:51.703 01:13:17 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.703 01:13:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:51.703 01:13:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.703 01:13:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:51.703 01:13:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.703 01:13:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:51.703 01:13:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.703 01:13:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:51.703 01:13:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.703 01:13:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:51.703 [2024-07-16 01:13:17.432904] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:51.703 01:13:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.703 01:13:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:07:51.703 01:13:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:07:51.703 01:13:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:07:54.982 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:58.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:01.582 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:04.864 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:08.155 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:08.155 01:13:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:08.155 01:13:33 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:08.155 01:13:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:08.155 01:13:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:08:08.155 01:13:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:08.155 01:13:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:08:08.155 01:13:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:08.155 01:13:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:08.155 rmmod nvme_tcp 00:08:08.155 rmmod nvme_fabrics 00:08:08.155 rmmod nvme_keyring 00:08:08.155 01:13:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:08.155 01:13:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:08:08.155 01:13:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:08:08.155 01:13:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3249337 ']' 00:08:08.155 01:13:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3249337 00:08:08.155 01:13:33 nvmf_tcp.nvmf_connect_disconnect -- 
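Behind the trace, the bring-up and the five iterations above are a handful of RPCs followed by a connect/disconnect loop. A minimal sketch using SPDK's scripts/rpc.py against the running target, with the values copied from the log; the loop body itself is hidden behind set +x in the log, so the connect/disconnect pair shown here is an assumption based on the test's name and the five "disconnected 1 controller(s)" notices:

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc.py bdev_malloc_create 64 512               # 64 MiB, 512 B blocks -> Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    for i in $(seq 1 5); do
        nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # emits the disconnect notice
    done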
common/autotest_common.sh@948 -- # '[' -z 3249337 ']' 00:08:08.155 01:13:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 3249337 00:08:08.155 01:13:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:08:08.155 01:13:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:08.155 01:13:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3249337 00:08:08.155 01:13:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:08.155 01:13:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:08.155 01:13:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3249337' 00:08:08.155 killing process with pid 3249337 00:08:08.155 01:13:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 3249337 00:08:08.155 01:13:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 3249337 00:08:08.155 01:13:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:08.155 01:13:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:08.155 01:13:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:08.155 01:13:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:08.155 01:13:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:08.155 01:13:33 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.155 01:13:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:08.155 01:13:33 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.058 01:13:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:10.317 00:08:10.317 real 0m25.055s 00:08:10.317 user 1m10.484s 00:08:10.317 sys 0m5.163s 00:08:10.317 01:13:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:10.317 01:13:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:10.317 ************************************ 00:08:10.317 END TEST nvmf_connect_disconnect 00:08:10.317 ************************************ 00:08:10.317 01:13:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:10.317 01:13:36 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:10.317 01:13:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:10.317 01:13:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.317 01:13:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:10.317 ************************************ 00:08:10.317 START TEST nvmf_multitarget 00:08:10.317 ************************************ 00:08:10.317 01:13:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:10.317 * Looking for test storage... 
00:08:10.317 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:10.317 01:13:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:10.317 01:13:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:08:10.317 01:13:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:10.317 01:13:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:10.317 01:13:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:10.317 01:13:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:10.317 01:13:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:10.317 01:13:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:10.317 01:13:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:10.317 01:13:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:10.317 01:13:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:10.317 01:13:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:10.317 01:13:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:10.317 01:13:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:10.317 01:13:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:10.317 01:13:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:10.317 01:13:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:10.317 01:13:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:10.317 01:13:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:10.317 01:13:36 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:10.317 01:13:36 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:10.317 01:13:36 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:10.317 01:13:36 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.317 01:13:36 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.317 01:13:36 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.317 01:13:36 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:08:10.317 01:13:36 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.317 01:13:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:08:10.317 01:13:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:10.317 01:13:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:10.317 01:13:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:10.317 01:13:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:10.317 01:13:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:10.317 01:13:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:10.317 01:13:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:10.317 01:13:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:10.317 01:13:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:10.317 01:13:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:08:10.317 01:13:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:10.318 01:13:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:10.318 01:13:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:10.318 01:13:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:10.318 01:13:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:10.318 01:13:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:08:10.318 01:13:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:10.318 01:13:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.318 01:13:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:10.318 01:13:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:10.318 01:13:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:08:10.318 01:13:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:15.614 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:15.614 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:15.614 Found net devices under 0000:86:00.0: cvl_0_0 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:15.614 Found net devices under 0000:86:00.1: cvl_0_1 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:15.614 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:15.614 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:08:15.614 00:08:15.614 --- 10.0.0.2 ping statistics --- 00:08:15.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.614 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:15.614 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:15.614 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:08:15.614 00:08:15.614 --- 10.0.0.1 ping statistics --- 00:08:15.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.614 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:15.614 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:15.615 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:15.615 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:15.615 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:15.896 01:13:41 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:15.897 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:15.897 01:13:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:15.897 01:13:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:15.897 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3255734 00:08:15.897 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 3255734 00:08:15.897 01:13:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:15.897 01:13:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 3255734 ']' 00:08:15.897 01:13:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.897 01:13:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:15.897 01:13:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.897 01:13:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:15.897 01:13:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:15.897 [2024-07-16 01:13:41.666543] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
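As in the previous test, nvmfappstart launches the target inside the namespace and then blocks in waitforlisten until the RPC socket answers. A reasonable approximation of that wait, assuming the default /var/tmp/spdk.sock socket, SPDK's scripts/rpc.py on PATH, and its stock spdk_get_version method as the liveness probe (paths shortened relative to the workspace in the log):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    pid=$!
    until rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        kill -0 "$pid" || { echo "target died during startup" >&2; exit 1; }
        sleep 0.2
    done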
00:08:15.897 [2024-07-16 01:13:41.666588] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.897 EAL: No free 2048 kB hugepages reported on node 1 00:08:15.897 [2024-07-16 01:13:41.724349] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:15.897 [2024-07-16 01:13:41.803896] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:15.897 [2024-07-16 01:13:41.803935] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:15.897 [2024-07-16 01:13:41.803942] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:15.897 [2024-07-16 01:13:41.803947] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:15.897 [2024-07-16 01:13:41.803952] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:15.897 [2024-07-16 01:13:41.803987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.897 [2024-07-16 01:13:41.804082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:15.897 [2024-07-16 01:13:41.804148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:15.897 [2024-07-16 01:13:41.804149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.832 01:13:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:16.832 01:13:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:08:16.832 01:13:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:16.832 01:13:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:16.832 01:13:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:16.832 01:13:42 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.832 01:13:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:16.832 01:13:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:16.832 01:13:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:08:16.832 01:13:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:16.832 01:13:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:16.832 "nvmf_tgt_1" 00:08:16.832 01:13:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:16.832 "nvmf_tgt_2" 00:08:16.832 01:13:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:08:16.832 01:13:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:17.091 01:13:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:08:17.091 01:13:42 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:17.091 true 00:08:17.091 01:13:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:17.354 true 00:08:17.354 01:13:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:17.354 01:13:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:08:17.354 01:13:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:17.354 01:13:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:17.354 01:13:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:08:17.354 01:13:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:17.354 01:13:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:08:17.354 01:13:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:17.354 01:13:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:08:17.354 01:13:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:17.354 01:13:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:17.354 rmmod nvme_tcp 00:08:17.354 rmmod nvme_fabrics 00:08:17.354 rmmod nvme_keyring 00:08:17.354 01:13:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:17.354 01:13:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:08:17.354 01:13:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:08:17.354 01:13:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3255734 ']' 00:08:17.354 01:13:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3255734 00:08:17.354 01:13:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 3255734 ']' 00:08:17.354 01:13:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 3255734 00:08:17.354 01:13:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:08:17.354 01:13:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:17.354 01:13:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3255734 00:08:17.354 01:13:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:17.354 01:13:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:17.354 01:13:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3255734' 00:08:17.354 killing process with pid 3255734 00:08:17.354 01:13:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 3255734 00:08:17.354 01:13:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 3255734 00:08:17.612 01:13:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:17.612 01:13:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:17.612 01:13:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:17.612 01:13:43 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:17.612 01:13:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:17.612 01:13:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.612 01:13:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:17.612 01:13:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.146 01:13:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:20.146 00:08:20.146 real 0m9.461s 00:08:20.146 user 0m9.024s 00:08:20.146 sys 0m4.516s 00:08:20.146 01:13:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:20.146 01:13:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:20.146 ************************************ 00:08:20.146 END TEST nvmf_multitarget 00:08:20.146 ************************************ 00:08:20.146 01:13:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:20.146 01:13:45 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:20.146 01:13:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:20.146 01:13:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.146 01:13:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:20.146 ************************************ 00:08:20.146 START TEST nvmf_rpc 00:08:20.146 ************************************ 00:08:20.146 01:13:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:20.146 * Looking for test storage... 
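The nvmf_multitarget pass that just completed reduces to a few calls against multitarget_rpc.py; a condensed sketch using the jq length checks from the trace ($rpc abbreviates the full script path shown above, and -s 32 is copied verbatim from the invocations there):

rpc=test/nvmf/target/multitarget_rpc.py            # relative to the SPDK checkout
[ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists
$rpc nvmf_create_target -n nvmf_tgt_1 -s 32
$rpc nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]   # default + the two new targets
$rpc nvmf_delete_target -n nvmf_tgt_1
$rpc nvmf_delete_target -n nvmf_tgt_2
[ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default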
00:08:20.146 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:20.146 01:13:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:20.146 01:13:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:08:20.146 01:13:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:20.146 01:13:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:20.146 01:13:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:20.146 01:13:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:20.146 01:13:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:20.146 01:13:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:20.146 01:13:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:20.146 01:13:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:20.146 01:13:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:20.146 01:13:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:20.146 01:13:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:20.146 01:13:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:20.146 01:13:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:20.146 01:13:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:20.146 01:13:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:20.146 01:13:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:20.146 01:13:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:20.146 01:13:45 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:20.146 01:13:45 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:20.146 01:13:45 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:20.146 01:13:45 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.146 01:13:45 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.146 01:13:45 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.146 01:13:45 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:08:20.146 01:13:45 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.146 01:13:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:08:20.146 01:13:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:20.146 01:13:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:20.146 01:13:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:20.146 01:13:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:20.146 01:13:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:20.146 01:13:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:20.146 01:13:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:20.146 01:13:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:20.146 01:13:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:08:20.146 01:13:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:08:20.146 01:13:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:20.146 01:13:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:20.146 01:13:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:20.146 01:13:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:20.146 01:13:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:20.147 01:13:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.147 01:13:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:20.147 01:13:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.147 01:13:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:20.147 01:13:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:20.147 01:13:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:08:20.147 01:13:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
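gather_supported_nvmf_pci_devs, whose trace follows, matches PCI devices against known Intel E810/X722 and Mellanox ID tables and then resolves each hit's net devices through sysfs. A rough equivalent of the lookup, with lspci standing in for the script's cached PCI bus scan (an assumption for illustration; the device IDs are copied from the trace below):

intel=0x8086
for dev_id in 0x1592 0x159b; do              # E810 IDs from the table below
    for pci in $(lspci -Dmm -d "${intel#0x}:${dev_id#0x}" | awk '{print $1}'); do
        ls "/sys/bus/pci/devices/$pci/net/"  # -> cvl_0_0, cvl_0_1 in this run
    done
done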
00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:25.410 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:25.410 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:25.410 Found net devices under 0000:86:00.0: cvl_0_0 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:25.410 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:25.411 Found net devices under 0000:86:00.1: cvl_0_1 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:25.411 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:25.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:08:25.411 00:08:25.411 --- 10.0.0.2 ping statistics --- 00:08:25.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.411 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:25.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
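Condensed from the nvmf_tcp_init commands above: one port of the NIC pair moves into a private namespace to act as the target, each side gets an address in 10.0.0.0/24, port 4420 is opened on the initiator side, and the pings confirm reachability in both directions before the target starts.

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator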
00:08:25.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:08:25.411 00:08:25.411 --- 10.0.0.1 ping statistics --- 00:08:25.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.411 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3259511 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3259511 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 3259511 ']' 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:25.411 01:13:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:25.411 [2024-07-16 01:13:51.042112] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:08:25.411 [2024-07-16 01:13:51.042157] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.411 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.411 [2024-07-16 01:13:51.099353] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:25.411 [2024-07-16 01:13:51.178679] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:25.411 [2024-07-16 01:13:51.178714] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:25.411 [2024-07-16 01:13:51.178721] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:25.411 [2024-07-16 01:13:51.178727] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:25.411 [2024-07-16 01:13:51.178732] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:25.411 [2024-07-16 01:13:51.178778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.411 [2024-07-16 01:13:51.178875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:25.411 [2024-07-16 01:13:51.178961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:25.411 [2024-07-16 01:13:51.178962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.977 01:13:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:25.977 01:13:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:25.977 01:13:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:25.977 01:13:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:25.977 01:13:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:25.977 01:13:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:25.977 01:13:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:08:25.977 01:13:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.977 01:13:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:25.977 01:13:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.977 01:13:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:08:25.977 "tick_rate": 2100000000, 00:08:25.977 "poll_groups": [ 00:08:25.977 { 00:08:25.977 "name": "nvmf_tgt_poll_group_000", 00:08:25.977 "admin_qpairs": 0, 00:08:25.977 "io_qpairs": 0, 00:08:25.977 "current_admin_qpairs": 0, 00:08:25.977 "current_io_qpairs": 0, 00:08:25.977 "pending_bdev_io": 0, 00:08:25.977 "completed_nvme_io": 0, 00:08:25.977 "transports": [] 00:08:25.977 }, 00:08:25.977 { 00:08:25.977 "name": "nvmf_tgt_poll_group_001", 00:08:25.977 "admin_qpairs": 0, 00:08:25.977 "io_qpairs": 0, 00:08:25.977 "current_admin_qpairs": 0, 00:08:25.977 "current_io_qpairs": 0, 00:08:25.977 "pending_bdev_io": 0, 00:08:25.977 "completed_nvme_io": 0, 00:08:25.977 "transports": [] 00:08:25.977 }, 00:08:25.977 { 00:08:25.977 "name": "nvmf_tgt_poll_group_002", 00:08:25.977 "admin_qpairs": 0, 00:08:25.977 "io_qpairs": 0, 00:08:25.977 "current_admin_qpairs": 0, 00:08:25.977 "current_io_qpairs": 0, 00:08:25.977 "pending_bdev_io": 0, 00:08:25.977 "completed_nvme_io": 0, 00:08:25.977 "transports": [] 00:08:25.977 }, 00:08:25.977 { 00:08:25.977 "name": "nvmf_tgt_poll_group_003", 00:08:25.977 "admin_qpairs": 0, 00:08:25.977 "io_qpairs": 0, 00:08:25.977 "current_admin_qpairs": 0, 00:08:25.977 "current_io_qpairs": 0, 00:08:25.977 "pending_bdev_io": 0, 00:08:25.977 "completed_nvme_io": 0, 00:08:25.977 "transports": [] 00:08:25.977 } 00:08:25.977 ] 00:08:25.977 }' 00:08:25.977 01:13:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:08:25.977 01:13:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:08:25.977 01:13:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:08:25.977 01:13:51 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:08:25.977 01:13:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:08:25.977 01:13:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:08:26.235 01:13:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:08:26.235 01:13:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:26.235 01:13:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.235 01:13:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:26.235 [2024-07-16 01:13:52.007630] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:08:26.235 "tick_rate": 2100000000, 00:08:26.235 "poll_groups": [ 00:08:26.235 { 00:08:26.235 "name": "nvmf_tgt_poll_group_000", 00:08:26.235 "admin_qpairs": 0, 00:08:26.235 "io_qpairs": 0, 00:08:26.235 "current_admin_qpairs": 0, 00:08:26.235 "current_io_qpairs": 0, 00:08:26.235 "pending_bdev_io": 0, 00:08:26.235 "completed_nvme_io": 0, 00:08:26.235 "transports": [ 00:08:26.235 { 00:08:26.235 "trtype": "TCP" 00:08:26.235 } 00:08:26.235 ] 00:08:26.235 }, 00:08:26.235 { 00:08:26.235 "name": "nvmf_tgt_poll_group_001", 00:08:26.235 "admin_qpairs": 0, 00:08:26.235 "io_qpairs": 0, 00:08:26.235 "current_admin_qpairs": 0, 00:08:26.235 "current_io_qpairs": 0, 00:08:26.235 "pending_bdev_io": 0, 00:08:26.235 "completed_nvme_io": 0, 00:08:26.235 "transports": [ 00:08:26.235 { 00:08:26.235 "trtype": "TCP" 00:08:26.235 } 00:08:26.235 ] 00:08:26.235 }, 00:08:26.235 { 00:08:26.235 "name": "nvmf_tgt_poll_group_002", 00:08:26.235 "admin_qpairs": 0, 00:08:26.235 "io_qpairs": 0, 00:08:26.235 "current_admin_qpairs": 0, 00:08:26.235 "current_io_qpairs": 0, 00:08:26.235 "pending_bdev_io": 0, 00:08:26.235 "completed_nvme_io": 0, 00:08:26.235 "transports": [ 00:08:26.235 { 00:08:26.235 "trtype": "TCP" 00:08:26.235 } 00:08:26.235 ] 00:08:26.235 }, 00:08:26.235 { 00:08:26.235 "name": "nvmf_tgt_poll_group_003", 00:08:26.235 "admin_qpairs": 0, 00:08:26.235 "io_qpairs": 0, 00:08:26.235 "current_admin_qpairs": 0, 00:08:26.235 "current_io_qpairs": 0, 00:08:26.235 "pending_bdev_io": 0, 00:08:26.235 "completed_nvme_io": 0, 00:08:26.235 "transports": [ 00:08:26.235 { 00:08:26.235 "trtype": "TCP" 00:08:26.235 } 00:08:26.235 ] 00:08:26.235 } 00:08:26.235 ] 00:08:26.235 }' 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
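The jcount and jsum helpers traced here filter a cached copy of the nvmf_get_stats output through jq; a functionally equivalent sketch (rpc_cmd stands in for the suite's rpc.py wrapper, and caching into $stats mirrors the stats='{...}' assignment in the trace):

stats=$(rpc_cmd nvmf_get_stats)            # captured once, as in the trace
jcount() { jq "$1" <<<"$stats" | wc -l; }  # how many values match the filter
jsum()   { jq "$1" <<<"$stats" | awk '{s+=$1} END {print s}'; }  # their sum
jcount '.poll_groups[].name'               # 4: one poll group per core in -m 0xF
jsum '.poll_groups[].admin_qpairs'         # 0: no host has connected yet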
00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:26.235 Malloc1 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:26.235 [2024-07-16 01:13:52.179567] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:26.235 01:13:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:26.235 [2024-07-16 01:13:52.204067] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:08:26.491 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:26.491 could not add new controller: failed to write to nvme-fabrics device 00:08:26.491 01:13:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:26.491 01:13:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:26.491 01:13:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:26.491 01:13:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:26.491 01:13:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:26.491 01:13:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.491 01:13:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:26.491 01:13:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.491 01:13:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:27.425 01:13:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:08:27.425 01:13:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:27.425 01:13:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:27.425 01:13:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:27.425 01:13:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:29.951 01:13:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:29.951 01:13:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:29.951 01:13:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:29.951 01:13:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:29.951 01:13:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:29.951 01:13:55 
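The sequence above is the allow-list check: with no matching host entry the fabrics connect is rejected by the target ("does not allow host"), and after nvmf_subsystem_add_host the same connect succeeds; the mirror-image pass below removes the host again and then re-opens access with allow-any-host. A condensed sketch with the NQNs from the trace:

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
SUBNQN=nqn.2016-06.io.spdk:cnode1
nvme connect -t tcp -n "$SUBNQN" -q "$HOSTNQN" -a 10.0.0.2 -s 4420 \
    && echo "unexpected success"                      # rejected: host not whitelisted
rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN"  # add the host NQN
nvme connect -t tcp -n "$SUBNQN" -q "$HOSTNQN" -a 10.0.0.2 -s 4420  # now succeeds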
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:29.951 01:13:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:29.951 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:29.951 01:13:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:29.951 01:13:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:29.951 01:13:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:29.951 01:13:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:29.951 01:13:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:29.951 01:13:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:29.951 01:13:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:29.951 01:13:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:29.951 01:13:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.951 01:13:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.951 01:13:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.951 01:13:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:29.951 01:13:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:29.951 01:13:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:29.951 01:13:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:08:29.951 01:13:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:29.951 01:13:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:29.951 01:13:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:29.951 01:13:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:29.951 01:13:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:29.951 01:13:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:29.951 01:13:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:29.951 01:13:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:29.951 [2024-07-16 01:13:55.545290] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:08:29.951 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:29.951 could not add new controller: failed to write to nvme-fabrics device 00:08:29.951 01:13:55 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:08:29.951 01:13:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:29.951 01:13:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:29.951 01:13:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:29.951 01:13:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:08:29.951 01:13:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.951 01:13:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.951 01:13:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.951 01:13:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:30.886 01:13:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:08:30.886 01:13:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:30.886 01:13:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:30.886 01:13:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:30.886 01:13:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:32.787 01:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:32.787 01:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:32.787 01:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:32.787 01:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:32.787 01:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:32.787 01:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:32.787 01:13:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:33.045 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.045 01:13:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:33.045 01:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:33.045 01:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:33.045 01:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:33.045 01:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:33.045 01:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:33.045 01:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:33.045 01:13:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:33.045 01:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.045 01:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.045 01:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.045 01:13:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:08:33.045 01:13:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:33.045 01:13:58 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:33.045 01:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.045 01:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.045 01:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.045 01:13:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:33.045 01:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.045 01:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.045 [2024-07-16 01:13:58.840621] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:33.045 01:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.045 01:13:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:33.045 01:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.045 01:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.045 01:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.045 01:13:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:33.045 01:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.045 01:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.045 01:13:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.045 01:13:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:34.419 01:14:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:34.419 01:14:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:34.419 01:14:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:34.419 01:14:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:34.419 01:14:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:36.318 01:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:36.318 01:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:36.318 01:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:36.318 01:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:36.318 01:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:36.318 01:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:36.318 01:14:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:36.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:36.318 01:14:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:36.318 01:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:36.318 01:14:02 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:36.318 01:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:36.318 01:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:36.318 01:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:36.318 01:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:36.318 01:14:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:36.318 01:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.319 01:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.319 01:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.319 01:14:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:36.319 01:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.319 01:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.319 01:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.319 01:14:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:36.319 01:14:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:36.319 01:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.319 01:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.319 01:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.319 01:14:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:36.319 01:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.319 01:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.319 [2024-07-16 01:14:02.173991] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:36.319 01:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.319 01:14:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:36.319 01:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.319 01:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.319 01:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.319 01:14:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:36.319 01:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.319 01:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.319 01:14:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.319 01:14:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:37.693 01:14:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:37.693 01:14:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:08:37.693 01:14:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:37.693 01:14:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:37.693 01:14:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:39.594 01:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:39.594 01:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:39.594 01:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:39.594 01:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:39.594 01:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:39.594 01:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:39.594 01:14:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:39.594 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:39.594 01:14:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:39.594 01:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:39.594 01:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:39.594 01:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:39.594 01:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:39.594 01:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:39.594 01:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:39.594 01:14:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:39.594 01:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.594 01:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:39.594 01:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.594 01:14:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:39.594 01:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.594 01:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:39.594 01:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.594 01:14:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:39.594 01:14:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:39.594 01:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.594 01:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:39.594 01:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.594 01:14:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:39.594 01:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.594 01:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:39.594 [2024-07-16 01:14:05.530772] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:39.594 01:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.594 01:14:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:39.594 01:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.594 01:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:39.594 01:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.594 01:14:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:39.594 01:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.594 01:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:39.594 01:14:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.594 01:14:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:40.969 01:14:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:40.969 01:14:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:40.969 01:14:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:40.969 01:14:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:40.969 01:14:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:42.870 01:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:42.870 01:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:42.870 01:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:42.870 01:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:42.870 01:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:42.870 01:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:42.870 01:14:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:42.870 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:42.870 01:14:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:42.870 01:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:42.870 01:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:42.870 01:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:42.870 01:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:42.870 01:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:42.870 01:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:42.870 01:14:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:42.870 01:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.870 01:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:42.870 01:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:08:42.870 01:14:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:42.870 01:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.870 01:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:42.870 01:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.870 01:14:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:42.870 01:14:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:42.870 01:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.870 01:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:42.870 01:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.870 01:14:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:42.870 01:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.870 01:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:42.870 [2024-07-16 01:14:08.852011] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:42.870 01:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.870 01:14:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:42.870 01:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.870 01:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.128 01:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.128 01:14:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:43.128 01:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.128 01:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.128 01:14:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.128 01:14:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:44.063 01:14:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:44.063 01:14:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:44.063 01:14:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:44.063 01:14:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:44.063 01:14:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:45.965 01:14:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:45.965 01:14:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:45.965 01:14:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:46.224 01:14:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:46.224 01:14:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:46.224 
01:14:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:46.224 01:14:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:46.224 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:46.224 01:14:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:46.224 01:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:46.224 01:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:46.224 01:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:46.224 01:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:46.224 01:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:46.224 01:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:46.224 01:14:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:46.224 01:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.224 01:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:46.224 01:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.224 01:14:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:46.224 01:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.224 01:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:46.224 01:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.224 01:14:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:46.224 01:14:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:46.224 01:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.224 01:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:46.224 01:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.224 01:14:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:46.224 01:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.224 01:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:46.224 [2024-07-16 01:14:12.114194] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:46.224 01:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.224 01:14:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:46.224 01:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.224 01:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:46.224 01:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.224 01:14:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:46.224 01:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.224 01:14:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:46.224 01:14:12 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.224 01:14:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:47.593 01:14:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:47.593 01:14:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:47.593 01:14:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:47.593 01:14:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:47.593 01:14:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:49.526 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:49.526 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:49.526 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:49.526 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:49.526 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:49.526 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:49.526 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:49.526 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.526 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:49.526 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:49.526 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:49.526 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:49.526 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:49.526 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:49.526 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:49.526 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:49.526 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.526 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.526 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.526 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:49.526 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.526 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.526 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.526 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:08:49.526 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:49.526 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:49.526 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.526 01:14:15 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:08:49.526 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.526 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:49.526 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.526 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.526 [2024-07-16 01:14:15.496089] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:49.526 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.526 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:49.526 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.526 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.526 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.526 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:49.526 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.527 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.785 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.785 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:49.785 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.785 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.785 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.785 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:49.785 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.785 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.785 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.785 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:49.785 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:49.785 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.785 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.785 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.785 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:49.785 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.785 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.785 [2024-07-16 01:14:15.544201] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:49.785 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.785 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:49.785 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:08:49.785 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.785 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.785 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:49.785 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.785 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.785 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.785 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:49.785 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.785 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.785 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.785 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:49.785 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.785 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.785 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.785 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:49.785 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:49.785 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.785 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.785 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.785 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:49.785 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.785 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.786 [2024-07-16 01:14:15.596389] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
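The iterations traced above and below all exercise the same subsystem lifecycle. Condensed, the RPC loop being driven here is (a minimal sketch, assuming SPDK's scripts/rpc.py is talking to the running target over its default RPC socket; the $rpc shorthand is illustrative, while the RPC names and arguments are exactly those in the trace):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
for i in $(seq 1 5); do
    $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME   # serial the host side greps for via lsblk
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns "$nqn" Malloc1                   # nsid auto-assigned, hence remove_ns 1 below
    $rpc nvmf_subsystem_allow_any_host "$nqn"
    $rpc nvmf_subsystem_remove_ns "$nqn" 1
    $rpc nvmf_delete_subsystem "$nqn"
done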
00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.786 [2024-07-16 01:14:15.644543] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
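After the final iteration the test aggregates the target's per-poll-group counters: the JSON dumped just below comes from nvmf_get_stats, and the jsum helper seen applying jq filters to it reduces to a jq-to-awk pipeline (reconstructed from the rpc.sh@19-20 trace lines; piping the RPC output directly is a simplification of the $stats variable the script actually captures first):

jsum() {
    local filter=$1
    $rpc nvmf_get_stats | jq "$filter" | awk '{s+=$1} END {print s}'
}
jsum '.poll_groups[].admin_qpairs'   # 2+2+1+2 = 7 in this run
jsum '.poll_groups[].io_qpairs'      # 4 groups x 168 = 672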
00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.786 [2024-07-16 01:14:15.692706] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:08:49.786 "tick_rate": 2100000000, 00:08:49.786 "poll_groups": [ 00:08:49.786 { 00:08:49.786 "name": "nvmf_tgt_poll_group_000", 00:08:49.786 "admin_qpairs": 2, 00:08:49.786 "io_qpairs": 168, 00:08:49.786 "current_admin_qpairs": 0, 00:08:49.786 "current_io_qpairs": 0, 00:08:49.786 "pending_bdev_io": 0, 00:08:49.786 "completed_nvme_io": 242, 00:08:49.786 "transports": [ 00:08:49.786 { 00:08:49.786 "trtype": "TCP" 00:08:49.786 } 00:08:49.786 ] 00:08:49.786 }, 00:08:49.786 { 00:08:49.786 "name": "nvmf_tgt_poll_group_001", 00:08:49.786 "admin_qpairs": 2, 00:08:49.786 "io_qpairs": 168, 00:08:49.786 "current_admin_qpairs": 0, 00:08:49.786 "current_io_qpairs": 0, 00:08:49.786 "pending_bdev_io": 0, 00:08:49.786 "completed_nvme_io": 281, 00:08:49.786 "transports": [ 00:08:49.786 { 00:08:49.786 "trtype": "TCP" 00:08:49.786 } 00:08:49.786 ] 00:08:49.786 }, 00:08:49.786 { 
00:08:49.786 "name": "nvmf_tgt_poll_group_002", 00:08:49.786 "admin_qpairs": 1, 00:08:49.786 "io_qpairs": 168, 00:08:49.786 "current_admin_qpairs": 0, 00:08:49.786 "current_io_qpairs": 0, 00:08:49.786 "pending_bdev_io": 0, 00:08:49.786 "completed_nvme_io": 245, 00:08:49.786 "transports": [ 00:08:49.786 { 00:08:49.786 "trtype": "TCP" 00:08:49.786 } 00:08:49.786 ] 00:08:49.786 }, 00:08:49.786 { 00:08:49.786 "name": "nvmf_tgt_poll_group_003", 00:08:49.786 "admin_qpairs": 2, 00:08:49.786 "io_qpairs": 168, 00:08:49.786 "current_admin_qpairs": 0, 00:08:49.786 "current_io_qpairs": 0, 00:08:49.786 "pending_bdev_io": 0, 00:08:49.786 "completed_nvme_io": 254, 00:08:49.786 "transports": [ 00:08:49.786 { 00:08:49.786 "trtype": "TCP" 00:08:49.786 } 00:08:49.786 ] 00:08:49.786 } 00:08:49.786 ] 00:08:49.786 }' 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:49.786 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:50.045 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:08:50.045 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:08:50.045 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:50.045 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:50.045 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:50.045 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:08:50.045 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:08:50.045 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:08:50.045 01:14:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:08:50.045 01:14:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:50.045 01:14:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:08:50.045 01:14:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:50.045 01:14:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:08:50.045 01:14:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:50.045 01:14:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:50.045 rmmod nvme_tcp 00:08:50.045 rmmod nvme_fabrics 00:08:50.045 rmmod nvme_keyring 00:08:50.045 01:14:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:50.045 01:14:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:08:50.045 01:14:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:08:50.045 01:14:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3259511 ']' 00:08:50.045 01:14:15 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3259511 00:08:50.045 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 3259511 ']' 00:08:50.045 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 3259511 00:08:50.045 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:08:50.045 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:50.045 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3259511 00:08:50.045 01:14:15 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:50.045 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:50.045 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3259511' 00:08:50.045 killing process with pid 3259511 00:08:50.045 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 3259511 00:08:50.045 01:14:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 3259511 00:08:50.304 01:14:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:50.304 01:14:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:50.304 01:14:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:50.304 01:14:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:50.304 01:14:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:50.304 01:14:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.304 01:14:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:50.304 01:14:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.831 01:14:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:52.831 00:08:52.831 real 0m32.549s 00:08:52.831 user 1m41.145s 00:08:52.831 sys 0m5.653s 00:08:52.831 01:14:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:52.831 01:14:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:52.831 ************************************ 00:08:52.831 END TEST nvmf_rpc 00:08:52.831 ************************************ 00:08:52.831 01:14:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:52.831 01:14:18 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:52.831 01:14:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:52.831 01:14:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:52.831 01:14:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:52.831 ************************************ 00:08:52.831 START TEST nvmf_invalid 00:08:52.831 ************************************ 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:52.831 * Looking for test storage... 
00:08:52.831 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:52.831 01:14:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.832 01:14:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:52.832 01:14:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:52.832 01:14:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:08:52.832 01:14:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:58.097 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:58.098 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:58.098 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:58.098 Found net devices under 0000:86:00.0: cvl_0_0 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:58.098 Found net devices under 0000:86:00.1: cvl_0_1 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:58.098 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:58.098 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:08:58.098 00:08:58.098 --- 10.0.0.2 ping statistics --- 00:08:58.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.098 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:58.098 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:58.098 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:08:58.098 00:08:58.098 --- 10.0.0.1 ping statistics --- 00:08:58.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.098 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=3267310 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3267310 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 3267310 ']' 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:58.098 01:14:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:58.098 [2024-07-16 01:14:23.701583] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
00:08:58.098 [2024-07-16 01:14:23.701626] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:58.098 EAL: No free 2048 kB hugepages reported on node 1 00:08:58.099 [2024-07-16 01:14:23.760745] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:58.099 [2024-07-16 01:14:23.838007] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:58.099 [2024-07-16 01:14:23.838048] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:58.099 [2024-07-16 01:14:23.838055] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:58.099 [2024-07-16 01:14:23.838061] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:58.099 [2024-07-16 01:14:23.838066] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:58.099 [2024-07-16 01:14:23.838110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:58.099 [2024-07-16 01:14:23.838207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:58.099 [2024-07-16 01:14:23.838301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:58.099 [2024-07-16 01:14:23.838303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.665 01:14:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:58.665 01:14:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:08:58.665 01:14:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:58.665 01:14:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:58.665 01:14:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:58.665 01:14:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:58.665 01:14:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:58.665 01:14:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode28385 00:08:58.923 [2024-07-16 01:14:24.679707] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:08:58.923 01:14:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:08:58.923 { 00:08:58.923 "nqn": "nqn.2016-06.io.spdk:cnode28385", 00:08:58.923 "tgt_name": "foobar", 00:08:58.923 "method": "nvmf_create_subsystem", 00:08:58.923 "req_id": 1 00:08:58.923 } 00:08:58.923 Got JSON-RPC error response 00:08:58.923 response: 00:08:58.923 { 00:08:58.923 "code": -32603, 00:08:58.923 "message": "Unable to find target foobar" 00:08:58.923 }' 00:08:58.923 01:14:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:08:58.923 { 00:08:58.923 "nqn": "nqn.2016-06.io.spdk:cnode28385", 00:08:58.923 "tgt_name": "foobar", 00:08:58.923 "method": "nvmf_create_subsystem", 00:08:58.923 "req_id": 1 00:08:58.923 } 00:08:58.923 Got JSON-RPC error response 00:08:58.923 response: 00:08:58.923 { 00:08:58.923 "code": -32603, 00:08:58.923 "message": "Unable to find target foobar" 
00:08:58.923 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]]
00:08:58.923 01:14:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f'
00:08:58.923 01:14:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode19373
00:08:58.923 [2024-07-16 01:14:24.880412] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19373: invalid serial number 'SPDKISFASTANDAWESOME'
00:08:59.181 01:14:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request:
00:08:59.181 {
00:08:59.181 "nqn": "nqn.2016-06.io.spdk:cnode19373",
00:08:59.181 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:08:59.181 "method": "nvmf_create_subsystem",
00:08:59.181 "req_id": 1
00:08:59.181 }
00:08:59.181 Got JSON-RPC error response
00:08:59.181 response:
00:08:59.181 {
00:08:59.181 "code": -32602,
00:08:59.181 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:08:59.181 }'
01:14:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request:
00:08:59.181 {
00:08:59.181 "nqn": "nqn.2016-06.io.spdk:cnode19373",
00:08:59.181 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:08:59.181 "method": "nvmf_create_subsystem",
00:08:59.181 "req_id": 1
00:08:59.181 }
00:08:59.181 Got JSON-RPC error response
00:08:59.181 response:
00:08:59.181 {
00:08:59.181 "code": -32602,
00:08:59.181 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:08:59.181 } == *\I\n\v\a\l\i\d\ \S\N* ]]
01:14:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f'
01:14:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode1134
00:08:59.181 [2024-07-16 01:14:25.085084] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1134: invalid model number 'SPDK_Controller'
00:08:59.181 01:14:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request:
00:08:59.181 {
00:08:59.181 "nqn": "nqn.2016-06.io.spdk:cnode1134",
00:08:59.181 "model_number": "SPDK_Controller\u001f",
00:08:59.181 "method": "nvmf_create_subsystem",
00:08:59.181 "req_id": 1
00:08:59.181 }
00:08:59.181 Got JSON-RPC error response
00:08:59.181 response:
00:08:59.181 {
00:08:59.181 "code": -32602,
00:08:59.181 "message": "Invalid MN SPDK_Controller\u001f"
00:08:59.181 }'
01:14:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request:
00:08:59.181 {
00:08:59.181 "nqn": "nqn.2016-06.io.spdk:cnode1134",
00:08:59.181 "model_number": "SPDK_Controller\u001f",
00:08:59.181 "method": "nvmf_create_subsystem",
00:08:59.181 "req_id": 1
00:08:59.181 }
00:08:59.181 Got JSON-RPC error response
00:08:59.181 response:
00:08:59.181 {
00:08:59.181 "code": -32602,
00:08:59.181 "message": "Invalid MN SPDK_Controller\u001f"
00:08:59.181 } == *\I\n\v\a\l\i\d\ \M\N* ]]
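
Both rejections above follow one recipe: take a value that would otherwise pass, append the non-printable byte 0x1f (built with $'...\037'), and require a -32602 response whose message names the offending field. Restated as a reduced, standalone check (rpc.py path and NQNs as in the log; capturing stderr into out is an assumption about how invalid.sh collects the response):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# serial numbers and model numbers must be printable ASCII; 0x1f is not
out=$("$rpc" nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode19373 2>&1) || true
[[ $out == *"Invalid SN"* ]] || echo "expected Invalid SN, got: $out" >&2

out=$("$rpc" nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode1134 2>&1) || true
[[ $out == *"Invalid MN"* ]] || echo "expected Invalid MN, got: $out" >&2

The || true keeps a set -e shell alive across the expected nonzero exit from rpc.py, so the pattern match on the captured response is the only pass/fail signal.
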
01:14:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21
01:14:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll
[... xtrace condensed: chars=('32' ... '127') is declared, then 21 near-identical loop passes each run printf %x / echo -e '\xNN' on one code from the table and append the resulting character with string+= ...]
01:14:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ ^ == \- ]]
01:14:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '^{p'\''xDyC)GR"kTc#epGI;'
01:14:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '^{p'\''xDyC)GR"kTc#epGI;' nqn.2016-06.io.spdk:cnode17724
[2024-07-16 01:14:25.402136] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17724: invalid serial number '^{p'xDyC)GR"kTc#epGI;'
00:08:59.702 01:14:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request:
00:08:59.702 {
00:08:59.702 "nqn": "nqn.2016-06.io.spdk:cnode17724",
00:08:59.702 "serial_number": "^{p'\''xDyC)GR\"kTc#epGI;",
00:08:59.702 "method": "nvmf_create_subsystem",
00:08:59.702 "req_id": 1
00:08:59.702 }
00:08:59.702 Got JSON-RPC error response
00:08:59.702 response:
00:08:59.702 {
00:08:59.702 "code": -32602,
00:08:59.702 "message": "Invalid SN ^{p'\''xDyC)GR\"kTc#epGI;"
00:08:59.702 }'
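
The condensed passes above are gen_random_s building its 21-character serial number one character at a time. A compact reconstruction inferred from the xtrace (the RANDOM-based pick of each code is an assumption; the decimal-to-'\xNN' conversion and the string+= append are exactly what the trace shows):

# Reconstruction of gen_random_s from the trace; random index choice assumed.
gen_random_s() {
    local length=$1 ll code ch
    local chars=({32..127})    # same decimal code table as the trace
    local string=
    for (( ll = 0; ll < length; ll++ )); do
        code=${chars[RANDOM % ${#chars[@]}]}
        printf -v ch '%b' "$(printf '\\x%x' "$code")"  # e.g. 94 -> \x5e -> '^'
        string+=$ch
    done
    # print with %s rather than echo: the trace's [[ ^ == \- ]] guard exists
    # because a string starting with '-' would be eaten as an echo option
    printf '%s\n' "$string"
}

With length 41 the same routine produces the random model number tested a few records later.
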
00:08:59.702 01:14:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request:
00:08:59.702 {
00:08:59.702 "nqn": "nqn.2016-06.io.spdk:cnode17724",
00:08:59.702 "serial_number": "^{p'xDyC)GR\"kTc#epGI;",
00:08:59.702 "method": "nvmf_create_subsystem",
00:08:59.702 "req_id": 1
00:08:59.702 }
00:08:59.702 Got JSON-RPC error response
00:08:59.702 response:
00:08:59.702 {
00:08:59.702 "code": -32602,
00:08:59.702 "message": "Invalid SN ^{p'xDyC)GR\"kTc#epGI;"
00:08:59.702 } == *\I\n\v\a\l\i\d\ \S\N* ]]
01:14:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41
01:14:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll
[... xtrace condensed: the same chars table and per-character printf %x / echo -e / string+= loop as above, this time for 41 passes, building the model number echoed below ...]
01:14:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ ) == \- ]]
01:14:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '),L=44%wViFOJ{N:^c@b #'\''_@}C`{10TjfcBE"'\''oJ'
01:14:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '),L=44%wViFOJ{N:^c@b #'\''_@}C`{10TjfcBE"'\''oJ' nqn.2016-06.io.spdk:cnode30104
00:08:59.962 [2024-07-16 01:14:25.839604] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30104: invalid model number '),L=44%wViFOJ{N:^c@b #'_@}C`{10TjfcBE"'oJ'
00:08:59.962 01:14:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request:
00:08:59.962 {
00:08:59.962 "nqn": "nqn.2016-06.io.spdk:cnode30104",
00:08:59.962 "model_number": "),L=44%wViFOJ{N:^c@b #'\''_@}C`{10TjfcBE\"'\''oJ",
00:08:59.962 "method": "nvmf_create_subsystem",
00:08:59.962 "req_id": 1
00:08:59.962 }
00:08:59.962 Got JSON-RPC error response
00:08:59.962 response:
00:08:59.962 {
00:08:59.962 "code": -32602,
00:08:59.962 "message": "Invalid MN ),L=44%wViFOJ{N:^c@b #'\''_@}C`{10TjfcBE\"'\''oJ"
00:08:59.962 }'
00:08:59.962 01:14:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request:
00:08:59.962 {
00:08:59.962 "nqn": "nqn.2016-06.io.spdk:cnode30104",
00:08:59.962 "model_number": "),L=44%wViFOJ{N:^c@b #'_@}C`{10TjfcBE\"'oJ",
00:08:59.962 "method": "nvmf_create_subsystem",
00:08:59.962 "req_id": 1
00:08:59.962 }
00:08:59.962 Got JSON-RPC error response
00:08:59.962 response:
00:08:59.962 {
00:08:59.962 "code": -32602,
00:08:59.962 "message": "Invalid MN ),L=44%wViFOJ{N:^c@b #'_@}C`{10TjfcBE\"'oJ"
00:08:59.962 } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:08:59.962 01:14:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:09:00.220 [2024-07-16 01:14:26.020262] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:00.220 01:14:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:09:00.478 01:14:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:09:00.478 01:14:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:09:00.478 01:14:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:09:00.478 01:14:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:09:00.478 01:14:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:09:00.478 [2024-07-16 01:14:26.418933] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:09:00.478 01:14:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:09:00.478 { 00:09:00.478 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:00.478 "listen_address": { 00:09:00.478 "trtype": "tcp", 00:09:00.478 "traddr": "", 00:09:00.478 "trsvcid": "4421" 00:09:00.478 }, 00:09:00.478 "method": "nvmf_subsystem_remove_listener", 00:09:00.478 "req_id": 1 00:09:00.478 } 00:09:00.478 Got JSON-RPC error response 00:09:00.478 response: 00:09:00.478 { 00:09:00.478 "code": -32602, 00:09:00.478 "message": "Invalid parameters" 00:09:00.478 }' 00:09:00.478 01:14:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:09:00.478 { 00:09:00.478 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:00.478 "listen_address": { 00:09:00.478 "trtype": "tcp", 00:09:00.478 "traddr": "", 00:09:00.478 "trsvcid": "4421" 00:09:00.478 }, 00:09:00.478 "method": "nvmf_subsystem_remove_listener", 00:09:00.478 "req_id": 1 00:09:00.478 } 00:09:00.478 Got JSON-RPC error response 00:09:00.478 response: 00:09:00.478 { 00:09:00.479 "code": -32602, 00:09:00.479 "message": "Invalid parameters" 00:09:00.479 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:09:00.479 01:14:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21472 -i 0 00:09:00.737 [2024-07-16 01:14:26.603518] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21472: invalid cntlid range [0-65519] 00:09:00.737 01:14:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:09:00.737 { 00:09:00.737 "nqn": "nqn.2016-06.io.spdk:cnode21472", 00:09:00.737 "min_cntlid": 0, 00:09:00.737 "method": "nvmf_create_subsystem", 00:09:00.737 "req_id": 1 00:09:00.737 } 00:09:00.737 Got JSON-RPC error response 00:09:00.737 response: 00:09:00.737 { 00:09:00.737 "code": -32602, 00:09:00.737 "message": "Invalid cntlid range [0-65519]" 00:09:00.737 }' 00:09:00.737 01:14:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:09:00.737 { 00:09:00.737 "nqn": "nqn.2016-06.io.spdk:cnode21472", 00:09:00.737 "min_cntlid": 0, 00:09:00.737 "method": "nvmf_create_subsystem", 00:09:00.737 "req_id": 1 00:09:00.737 } 00:09:00.737 Got JSON-RPC error response 00:09:00.737 response: 00:09:00.737 { 00:09:00.737 "code": -32602, 00:09:00.737 "message": "Invalid cntlid range [0-65519]" 00:09:00.737 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ 
\r\a\n\g\e* ]] 00:09:00.737 01:14:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26011 -i 65520 00:09:00.996 [2024-07-16 01:14:26.784121] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26011: invalid cntlid range [65520-65519] 00:09:00.996 01:14:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:09:00.996 { 00:09:00.996 "nqn": "nqn.2016-06.io.spdk:cnode26011", 00:09:00.996 "min_cntlid": 65520, 00:09:00.996 "method": "nvmf_create_subsystem", 00:09:00.996 "req_id": 1 00:09:00.996 } 00:09:00.996 Got JSON-RPC error response 00:09:00.996 response: 00:09:00.996 { 00:09:00.996 "code": -32602, 00:09:00.996 "message": "Invalid cntlid range [65520-65519]" 00:09:00.996 }' 00:09:00.996 01:14:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:09:00.996 { 00:09:00.996 "nqn": "nqn.2016-06.io.spdk:cnode26011", 00:09:00.996 "min_cntlid": 65520, 00:09:00.996 "method": "nvmf_create_subsystem", 00:09:00.996 "req_id": 1 00:09:00.996 } 00:09:00.996 Got JSON-RPC error response 00:09:00.996 response: 00:09:00.996 { 00:09:00.996 "code": -32602, 00:09:00.996 "message": "Invalid cntlid range [65520-65519]" 00:09:00.996 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:00.996 01:14:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20257 -I 0 00:09:00.996 [2024-07-16 01:14:26.964702] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20257: invalid cntlid range [1-0] 00:09:01.254 01:14:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:09:01.254 { 00:09:01.254 "nqn": "nqn.2016-06.io.spdk:cnode20257", 00:09:01.254 "max_cntlid": 0, 00:09:01.254 "method": "nvmf_create_subsystem", 00:09:01.254 "req_id": 1 00:09:01.254 } 00:09:01.254 Got JSON-RPC error response 00:09:01.254 response: 00:09:01.254 { 00:09:01.254 "code": -32602, 00:09:01.254 "message": "Invalid cntlid range [1-0]" 00:09:01.254 }' 00:09:01.254 01:14:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:09:01.254 { 00:09:01.254 "nqn": "nqn.2016-06.io.spdk:cnode20257", 00:09:01.254 "max_cntlid": 0, 00:09:01.254 "method": "nvmf_create_subsystem", 00:09:01.254 "req_id": 1 00:09:01.254 } 00:09:01.254 Got JSON-RPC error response 00:09:01.254 response: 00:09:01.254 { 00:09:01.254 "code": -32602, 00:09:01.254 "message": "Invalid cntlid range [1-0]" 00:09:01.254 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:01.254 01:14:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3062 -I 65520 00:09:01.254 [2024-07-16 01:14:27.145325] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3062: invalid cntlid range [1-65520] 00:09:01.254 01:14:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:09:01.254 { 00:09:01.254 "nqn": "nqn.2016-06.io.spdk:cnode3062", 00:09:01.254 "max_cntlid": 65520, 00:09:01.254 "method": "nvmf_create_subsystem", 00:09:01.254 "req_id": 1 00:09:01.254 } 00:09:01.254 Got JSON-RPC error response 00:09:01.254 response: 00:09:01.254 { 00:09:01.254 "code": -32602, 00:09:01.254 "message": "Invalid cntlid range [1-65520]" 00:09:01.254 }' 00:09:01.254 01:14:27 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@80 -- # [[ request: 00:09:01.254 { 00:09:01.254 "nqn": "nqn.2016-06.io.spdk:cnode3062", 00:09:01.254 "max_cntlid": 65520, 00:09:01.254 "method": "nvmf_create_subsystem", 00:09:01.254 "req_id": 1 00:09:01.254 } 00:09:01.254 Got JSON-RPC error response 00:09:01.254 response: 00:09:01.254 { 00:09:01.254 "code": -32602, 00:09:01.254 "message": "Invalid cntlid range [1-65520]" 00:09:01.254 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:01.254 01:14:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23523 -i 6 -I 5 00:09:01.513 [2024-07-16 01:14:27.317915] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23523: invalid cntlid range [6-5] 00:09:01.513 01:14:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:09:01.513 { 00:09:01.513 "nqn": "nqn.2016-06.io.spdk:cnode23523", 00:09:01.513 "min_cntlid": 6, 00:09:01.513 "max_cntlid": 5, 00:09:01.513 "method": "nvmf_create_subsystem", 00:09:01.513 "req_id": 1 00:09:01.513 } 00:09:01.513 Got JSON-RPC error response 00:09:01.513 response: 00:09:01.513 { 00:09:01.513 "code": -32602, 00:09:01.513 "message": "Invalid cntlid range [6-5]" 00:09:01.513 }' 00:09:01.513 01:14:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:09:01.513 { 00:09:01.513 "nqn": "nqn.2016-06.io.spdk:cnode23523", 00:09:01.513 "min_cntlid": 6, 00:09:01.513 "max_cntlid": 5, 00:09:01.513 "method": "nvmf_create_subsystem", 00:09:01.513 "req_id": 1 00:09:01.513 } 00:09:01.513 Got JSON-RPC error response 00:09:01.513 response: 00:09:01.513 { 00:09:01.513 "code": -32602, 00:09:01.513 "message": "Invalid cntlid range [6-5]" 00:09:01.513 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:01.513 01:14:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:09:01.513 01:14:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:09:01.513 { 00:09:01.513 "name": "foobar", 00:09:01.513 "method": "nvmf_delete_target", 00:09:01.513 "req_id": 1 00:09:01.513 } 00:09:01.513 Got JSON-RPC error response 00:09:01.513 response: 00:09:01.513 { 00:09:01.513 "code": -32602, 00:09:01.513 "message": "The specified target doesn'\''t exist, cannot delete it." 00:09:01.513 }' 00:09:01.513 01:14:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:09:01.513 { 00:09:01.513 "name": "foobar", 00:09:01.513 "method": "nvmf_delete_target", 00:09:01.513 "req_id": 1 00:09:01.513 } 00:09:01.513 Got JSON-RPC error response 00:09:01.513 response: 00:09:01.513 { 00:09:01.513 "code": -32602, 00:09:01.513 "message": "The specified target doesn't exist, cannot delete it." 
00:09:01.513 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:09:01.513 01:14:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:09:01.513 01:14:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:09:01.513 01:14:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:01.513 01:14:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:09:01.513 01:14:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:01.513 01:14:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:09:01.513 01:14:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:01.513 01:14:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:01.513 rmmod nvme_tcp 00:09:01.513 rmmod nvme_fabrics 00:09:01.513 rmmod nvme_keyring 00:09:01.513 01:14:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:01.772 01:14:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:09:01.772 01:14:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:09:01.772 01:14:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 3267310 ']' 00:09:01.772 01:14:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 3267310 00:09:01.772 01:14:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 3267310 ']' 00:09:01.772 01:14:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 3267310 00:09:01.772 01:14:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:09:01.772 01:14:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:01.772 01:14:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3267310 00:09:01.772 01:14:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:01.772 01:14:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:01.772 01:14:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3267310' 00:09:01.772 killing process with pid 3267310 00:09:01.772 01:14:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 3267310 00:09:01.772 01:14:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 3267310 00:09:01.772 01:14:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:01.772 01:14:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:01.772 01:14:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:01.772 01:14:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:01.772 01:14:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:01.772 01:14:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.772 01:14:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:01.772 01:14:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:04.355 01:14:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:04.355 00:09:04.355 real 0m11.534s 00:09:04.355 user 0m19.268s 00:09:04.355 sys 0m4.867s 00:09:04.355 01:14:29 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:04.355 01:14:29 
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:04.355 ************************************ 00:09:04.355 END TEST nvmf_invalid 00:09:04.355 ************************************ 00:09:04.355 01:14:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:04.355 01:14:29 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:04.355 01:14:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:04.355 01:14:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:04.355 01:14:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:04.355 ************************************ 00:09:04.355 START TEST nvmf_abort 00:09:04.355 ************************************ 00:09:04.355 01:14:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:04.355 * Looking for test storage... 00:09:04.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:04.355 01:14:29 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:04.355 01:14:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:04.355 01:14:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:04.355 01:14:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:04.355 01:14:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:04.355 01:14:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:04.355 01:14:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:04.355 01:14:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:04.355 01:14:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:04.355 01:14:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:04.355 01:14:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:04.355 01:14:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:04.355 01:14:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:04.355 01:14:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:04.355 01:14:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:04.355 01:14:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:04.355 01:14:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:04.355 01:14:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:04.355 01:14:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:04.355 01:14:29 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:04.355 01:14:29 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:04.355 01:14:29 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:04.355 01:14:29 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.355 01:14:29 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.355 01:14:29 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.355 01:14:29 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:04.355 01:14:29 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.355 01:14:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:09:04.355 01:14:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:04.355 01:14:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:04.355 01:14:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:04.355 01:14:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:04.355 01:14:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:04.355 01:14:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:04.355 01:14:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:04.355 01:14:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:04.355 01:14:29 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:04.355 01:14:29 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:04.356 01:14:29 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:09:04.356 01:14:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:04.356 01:14:29 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:04.356 01:14:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:04.356 01:14:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:04.356 01:14:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:04.356 01:14:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.356 01:14:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:04.356 01:14:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:04.356 01:14:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:04.356 01:14:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:04.356 01:14:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:09:04.356 01:14:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:09.622 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:09.622 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:09:09.622 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:09.622 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:09.622 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:09.622 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:09.622 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:09.622 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:09:09.622 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:09.622 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:09:09.622 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:09:09.622 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:09:09.622 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:09:09.622 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:09:09.622 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:09:09.622 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:09.622 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:09.622 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:09.622 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:09.622 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:09.622 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:09.622 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:09.623 
01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:09.623 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:09.623 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:09.623 Found net devices under 0000:86:00.0: cvl_0_0 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:09.623 Found net devices under 0000:86:00.1: cvl_0_1 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:09.623 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:09.623 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:09:09.623 00:09:09.623 --- 10.0.0.2 ping statistics --- 00:09:09.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.623 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:09.623 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:09.623 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:09:09.623 00:09:09.623 --- 10.0.0.1 ping statistics --- 00:09:09.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.623 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=3271515 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3271515 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 3271515 ']' 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:09.623 01:14:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:09.623 [2024-07-16 01:14:35.598502] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
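For reference, the nvmf_tcp_init sequence traced above reduces to a short piece of iproute2/iptables plumbing. A minimal sketch, using the interface names and addressing from this run (cvl_0_0 on the target side, cvl_0_1 on the initiator side) rather than the verbatim nvmf/common.sh code:

ip netns add cvl_0_0_ns_spdk                        # private namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator port stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
ping -c 1 10.0.0.2                                  # root ns -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> root ns

Splitting the two ports of one NIC across namespaces lets a single host exercise a real TCP path between initiator and target; the target app is then launched inside the namespace, which is why nvmf_tgt above runs under 'ip netns exec cvl_0_0_ns_spdk'.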
00:09:09.623 [2024-07-16 01:14:35.598547] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:09.882 EAL: No free 2048 kB hugepages reported on node 1 00:09:09.882 [2024-07-16 01:14:35.657375] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:09.882 [2024-07-16 01:14:35.737593] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:09.882 [2024-07-16 01:14:35.737628] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:09.882 [2024-07-16 01:14:35.737635] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:09.882 [2024-07-16 01:14:35.737641] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:09.882 [2024-07-16 01:14:35.737646] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:09.882 [2024-07-16 01:14:35.737752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:09.882 [2024-07-16 01:14:35.737791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:09.882 [2024-07-16 01:14:35.737797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:10.448 01:14:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:10.448 01:14:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:09:10.448 01:14:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:10.448 01:14:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:10.448 01:14:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:10.448 01:14:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:10.448 01:14:36 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:10.448 01:14:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.448 01:14:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:10.706 [2024-07-16 01:14:36.438218] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:10.706 01:14:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.706 01:14:36 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:10.706 01:14:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.706 01:14:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:10.706 Malloc0 00:09:10.706 01:14:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.706 01:14:36 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:10.706 01:14:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.706 01:14:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:10.706 Delay0 00:09:10.706 01:14:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.706 01:14:36 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
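The abort target is assembled over JSON-RPC; rpc_cmd in the trace is the test harness's shorthand for driving scripts/rpc.py against the app's RPC socket. Collected from this and the next few entries, the sequence amounts to roughly the following (rpc.py path shortened; flag values exactly as traced, and the 1000000 delay values are bdev_delay latencies, about one second each if read as microseconds):

rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256       # TCP transport, flags as traced
rpc.py bdev_malloc_create 64 4096 -b Malloc0                # 64 MB bdev, 4096-byte blocks
rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000             # delay bdev stacked on Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The delay layer appears to be the point of the exercise: it holds reads in flight long enough that the abort commands issued below actually find outstanding I/O to cancel.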
00:09:10.706 01:14:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.706 01:14:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:10.706 01:14:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.707 01:14:36 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:10.707 01:14:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.707 01:14:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:10.707 01:14:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.707 01:14:36 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:10.707 01:14:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.707 01:14:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:10.707 [2024-07-16 01:14:36.510009] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:10.707 01:14:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.707 01:14:36 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:10.707 01:14:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.707 01:14:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:10.707 01:14:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.707 01:14:36 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:10.707 EAL: No free 2048 kB hugepages reported on node 1 00:09:10.707 [2024-07-16 01:14:36.615886] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:13.238 Initializing NVMe Controllers 00:09:13.238 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:13.238 controller IO queue size 128 less than required 00:09:13.238 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:13.238 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:13.238 Initialization complete. Launching workers. 
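The initiator side is SPDK's bundled abort example, reproduced below from the trace; the annotations on the flags and on the summary counters that follow are a best-effort gloss, not authoritative tool documentation:

# -c 0x1: single core; -t 1: run for one second; -q 128: queue depth;
# -l warning: only warnings and above from the example's logger.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128
# Reading the summary: "I/O completed" are reads that finished normally,
# "failed" are reads cancelled by an abort while in flight, and the
# "abort submitted ... success ..., unsuccess ..." line tallies the ABORT
# commands themselves. The queue-size warning just notes the controller
# capped its I/O queue at 128, so excess submissions queue in the driver.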
00:09:13.238 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 44433 00:09:13.238 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 44494, failed to submit 62 00:09:13.238 success 44437, unsuccess 57, failed 0 00:09:13.238 01:14:38 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:13.238 01:14:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.238 01:14:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:13.238 01:14:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.238 01:14:38 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:13.238 01:14:38 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:13.238 01:14:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:13.238 01:14:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:09:13.238 01:14:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:13.238 01:14:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:09:13.238 01:14:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:13.238 01:14:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:13.238 rmmod nvme_tcp 00:09:13.238 rmmod nvme_fabrics 00:09:13.238 rmmod nvme_keyring 00:09:13.238 01:14:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:13.238 01:14:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:09:13.238 01:14:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:09:13.238 01:14:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3271515 ']' 00:09:13.238 01:14:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3271515 00:09:13.238 01:14:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 3271515 ']' 00:09:13.238 01:14:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 3271515 00:09:13.238 01:14:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:09:13.238 01:14:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:13.238 01:14:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3271515 00:09:13.238 01:14:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:13.238 01:14:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:13.238 01:14:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3271515' 00:09:13.238 killing process with pid 3271515 00:09:13.238 01:14:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 3271515 00:09:13.238 01:14:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 3271515 00:09:13.238 01:14:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:13.238 01:14:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:13.238 01:14:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:13.238 01:14:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:13.238 01:14:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:13.238 01:14:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.238 01:14:39 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:13.238 01:14:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.141 01:14:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:15.141 00:09:15.141 real 0m11.206s 00:09:15.141 user 0m13.038s 00:09:15.141 sys 0m5.106s 00:09:15.141 01:14:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:15.141 01:14:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:15.141 ************************************ 00:09:15.141 END TEST nvmf_abort 00:09:15.141 ************************************ 00:09:15.141 01:14:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:15.141 01:14:41 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:15.141 01:14:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:15.141 01:14:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:15.141 01:14:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:15.400 ************************************ 00:09:15.400 START TEST nvmf_ns_hotplug_stress 00:09:15.400 ************************************ 00:09:15.400 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:15.400 * Looking for test storage... 00:09:15.400 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:15.400 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:15.400 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:15.400 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:15.400 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:15.400 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:15.400 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:15.400 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:15.400 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:15.400 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:15.400 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:15.400 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:15.400 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:15.400 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:15.400 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:15.400 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:15.400 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:15.401 01:14:41 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:15.401 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:15.401 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:15.401 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:15.401 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:15.401 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:15.401 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.401 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.401 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.401 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:15.401 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.401 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:09:15.401 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:15.401 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:15.401 01:14:41 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:15.401 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:15.401 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:15.401 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:15.401 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:15.401 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:15.401 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:15.401 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:15.401 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:15.401 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:15.401 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:15.401 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:15.401 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:15.401 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:15.401 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:15.401 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.401 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:15.401 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:15.401 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:15.401 01:14:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:20.684 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:20.684 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:09:20.684 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:20.684 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:20.684 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:20.684 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:20.684 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:20.684 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:09:20.684 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:20.684 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:09:20.684 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:09:20.684 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:09:20.684 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:09:20.684 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:09:20.684 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:09:20.684 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:20.684 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:20.684 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:20.684 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:20.684 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:20.684 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:20.684 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:20.684 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:20.684 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:20.684 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:20.684 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:20.684 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:20.684 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:20.684 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:20.684 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:20.684 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:20.684 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:20.684 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:20.684 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:20.684 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:20.684 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:20.684 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:20.684 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:20.684 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:20.684 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:20.684 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:20.684 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:20.684 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:20.684 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:20.685 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:20.685 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:20.685 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:20.685 01:14:45 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:20.685 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:20.685 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:20.685 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:20.685 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:20.685 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.685 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:20.685 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:20.685 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:20.685 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:20.685 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.685 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:20.685 Found net devices under 0000:86:00.0: cvl_0_0 00:09:20.685 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.685 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:20.685 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.685 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:20.685 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:20.685 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:20.685 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:20.685 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.685 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:20.685 Found net devices under 0000:86:00.1: cvl_0_1 00:09:20.685 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.685 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:20.685 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:09:20.685 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:20.685 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:20.685 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:20.685 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:20.685 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:20.685 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:20.685 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:20.685 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:20.685 01:14:45 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:20.685 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:20.685 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:20.685 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:20.685 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:20.685 01:14:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:20.685 01:14:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:20.685 01:14:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:20.685 01:14:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:20.685 01:14:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:20.685 01:14:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:20.685 01:14:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:20.685 01:14:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:20.685 01:14:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:20.685 01:14:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:20.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:20.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:09:20.685 00:09:20.685 --- 10.0.0.2 ping statistics --- 00:09:20.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.685 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:09:20.685 01:14:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:20.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:20.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:09:20.685 00:09:20.685 --- 10.0.0.1 ping statistics --- 00:09:20.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.685 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:09:20.685 01:14:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:20.685 01:14:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:09:20.685 01:14:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:20.685 01:14:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:20.685 01:14:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:20.685 01:14:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:20.685 01:14:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:20.685 01:14:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:20.685 01:14:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:20.685 01:14:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:09:20.685 01:14:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:20.685 01:14:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:20.685 01:14:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:20.685 01:14:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3275510 00:09:20.685 01:14:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3275510 00:09:20.685 01:14:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 3275510 ']' 00:09:20.685 01:14:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.685 01:14:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:20.685 01:14:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.685 01:14:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:20.685 01:14:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:20.685 01:14:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:20.685 [2024-07-16 01:14:46.295688] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
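The second target comes up through the same namespace and transport plumbing; the entries that follow build a subsystem capable of ten namespaces (nvmf_create_subsystem ... -m 10) and then hot-plug namespaces underneath a live reader. Sketched from the traced RPC calls rather than quoted from ns_hotplug_stress.sh, the driving loop looks like:

rpc.py bdev_null_create NULL1 1000 512        # 1000 MB null bdev, 512-byte blocks
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
build/bin/spdk_nvme_perf -c 0x1 -t 30 -q 128 -w randread -o 512 -Q 1000 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
PERF_PID=$!
null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do          # keep going while perf runs
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # hot-remove nsid 1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # hot-add it back
    null_size=$((null_size + 1))
    rpc.py bdev_null_resize NULL1 "$null_size"                     # grow NULL1 by 1 MB
done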
00:09:20.685 [2024-07-16 01:14:46.295730] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:20.685 EAL: No free 2048 kB hugepages reported on node 1 00:09:20.685 [2024-07-16 01:14:46.353291] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:20.685 [2024-07-16 01:14:46.430951] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:20.685 [2024-07-16 01:14:46.430985] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:20.685 [2024-07-16 01:14:46.430992] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:20.685 [2024-07-16 01:14:46.430998] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:20.685 [2024-07-16 01:14:46.431007] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:20.685 [2024-07-16 01:14:46.431107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:20.685 [2024-07-16 01:14:46.431127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:20.685 [2024-07-16 01:14:46.431128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.332 01:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:21.332 01:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:09:21.332 01:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:21.332 01:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:21.332 01:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:21.332 01:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:21.332 01:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:09:21.332 01:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:21.332 [2024-07-16 01:14:47.295300] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:21.589 01:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:21.589 01:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:21.847 [2024-07-16 01:14:47.656585] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:21.847 01:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:22.105 01:14:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:09:22.105 Malloc0 00:09:22.105 01:14:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:22.363 Delay0 00:09:22.363 01:14:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:22.621 01:14:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:22.621 NULL1 00:09:22.621 01:14:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:22.878 01:14:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:22.878 01:14:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3275999 00:09:22.878 01:14:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275999 00:09:22.878 01:14:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:22.878 EAL: No free 2048 kB hugepages reported on node 1 00:09:23.136 01:14:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:23.136 Read completed with error (sct=0, sc=11) 00:09:23.136 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:23.136 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:23.136 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:23.136 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:23.394 01:14:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:09:23.394 01:14:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:23.394 true 00:09:23.394 01:14:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275999 00:09:23.394 01:14:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:24.328 01:14:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:24.586 01:14:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:09:24.586 01:14:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:24.586 true 00:09:24.586 01:14:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill 
-0 3275999 00:09:24.586 01:14:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:24.844 01:14:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:25.101 01:14:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:09:25.101 01:14:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:25.101 true 00:09:25.101 01:14:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275999 00:09:25.101 01:14:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:26.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:26.476 01:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:26.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:26.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:26.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:26.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:26.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:26.476 01:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:09:26.476 01:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:26.734 true 00:09:26.734 01:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275999 00:09:26.734 01:14:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:27.668 01:14:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:27.668 01:14:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:09:27.668 01:14:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:27.926 true 00:09:27.926 01:14:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275999 00:09:27.926 01:14:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:28.184 01:14:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:28.184 01:14:54 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:09:28.184 01:14:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:28.442 true 00:09:28.442 01:14:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275999 00:09:28.442 01:14:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:29.818 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:29.818 01:14:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:29.818 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:29.818 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:29.818 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:29.818 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:29.818 01:14:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:09:29.818 01:14:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:30.076 true 00:09:30.076 01:14:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275999 00:09:30.076 01:14:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:31.012 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.012 01:14:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:31.012 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:31.012 01:14:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:09:31.012 01:14:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:09:31.270 true 00:09:31.270 01:14:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275999 00:09:31.270 01:14:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:31.270 01:14:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:31.528 01:14:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:09:31.528 01:14:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:09:31.785 true 00:09:31.785 01:14:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275999 00:09:31.785 01:14:57 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:31.785 01:14:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:32.044 01:14:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:09:32.044 01:14:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:09:32.302 true 00:09:32.302 01:14:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275999 00:09:32.302 01:14:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:32.302 01:14:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:32.561 01:14:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:09:32.561 01:14:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:09:32.819 true 00:09:32.819 01:14:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275999 00:09:32.819 01:14:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:32.819 01:14:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:33.078 01:14:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:09:33.078 01:14:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:09:33.337 true 00:09:33.337 01:14:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275999 00:09:33.337 01:14:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:33.337 01:14:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:33.596 01:14:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:09:33.596 01:14:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:09:33.863 true 00:09:33.863 01:14:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275999 00:09:33.863 01:14:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
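Each pass above bumps the null bdev by one unit while nsid 1 is yanked and restored under the running reader; one pass at this point in the trace is just:

rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # reads to nsid 1 start failing
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # the namespace comes back
rpc.py bdev_null_resize NULL1 1014                             # size in MB, was 1013

The interleaved 'Message suppressed 999 times: Read completed with error (sct=0, sc=11)' lines are spdk_nvme_perf reporting reads that complete with an error while their namespace is detached, apparently rate-limited to one printed line per thousand occurrences in line with the -Q 1000 option.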
00:09:34.121 01:14:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:34.121 01:15:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:09:34.121 01:15:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:34.380 true 00:09:34.380 01:15:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275999 00:09:34.380 01:15:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:34.639 01:15:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:34.639 01:15:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:09:34.639 01:15:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:34.897 true 00:09:34.897 01:15:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275999 00:09:34.897 01:15:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:35.156 01:15:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:35.414 01:15:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:09:35.414 01:15:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:35.414 true 00:09:35.414 01:15:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275999 00:09:35.414 01:15:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:35.673 01:15:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:35.932 01:15:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:09:35.932 01:15:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:35.932 true 00:09:35.932 01:15:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275999 00:09:35.932 01:15:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.190 01:15:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:36.448 01:15:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:09:36.448 01:15:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:36.448 true 00:09:36.707 01:15:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275999 00:09:36.707 01:15:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.707 01:15:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:36.966 01:15:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:09:36.966 01:15:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:37.224 true 00:09:37.224 01:15:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275999 00:09:37.224 01:15:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.224 01:15:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:37.480 01:15:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:37.480 01:15:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:37.738 true 00:09:37.738 01:15:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275999 00:09:37.738 01:15:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.738 01:15:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:37.996 01:15:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:37.996 01:15:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:38.254 true 00:09:38.254 01:15:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275999 00:09:38.254 01:15:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:38.511 01:15:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:38.511 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:38.511 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:38.511 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:38.511 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:38.511 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:38.511 01:15:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:38.511 01:15:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:38.770 true 00:09:38.770 01:15:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275999 00:09:38.770 01:15:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.705 01:15:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:39.705 01:15:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:09:39.705 01:15:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:39.964 true 00:09:39.964 01:15:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275999 00:09:39.964 01:15:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:40.222 01:15:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:40.222 01:15:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:09:40.222 01:15:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:40.481 true 00:09:40.481 01:15:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275999 00:09:40.481 01:15:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:41.858 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.858 01:15:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:41.858 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:41.858 01:15:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:09:41.858 01:15:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:42.116 true 00:09:42.116 01:15:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275999 00:09:42.116 01:15:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:42.116 01:15:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:42.375 01:15:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:09:42.375 01:15:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:09:42.633 true 00:09:42.633 01:15:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275999 00:09:42.633 01:15:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:42.891 01:15:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:42.891 01:15:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:09:42.891 01:15:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:09:43.149 true 00:09:43.149 01:15:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275999 00:09:43.149 01:15:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.406 01:15:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:43.668 01:15:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:09:43.668 01:15:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:09:43.668 true 00:09:43.668 01:15:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275999 00:09:43.668 01:15:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:45.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.087 01:15:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:45.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.087 01:15:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:09:45.087 01:15:10 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:09:45.087 true 00:09:45.345 01:15:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275999 00:09:45.345 01:15:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:46.278 01:15:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:46.278 01:15:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:09:46.278 01:15:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:09:46.278 true 00:09:46.278 01:15:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275999 00:09:46.278 01:15:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:46.536 01:15:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:46.795 01:15:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:09:46.795 01:15:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:09:46.795 true 00:09:47.055 01:15:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275999 00:09:47.055 01:15:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:47.989 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:47.989 01:15:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:47.989 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:48.247 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:48.247 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:48.247 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:48.247 01:15:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:09:48.247 01:15:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:09:48.506 true 00:09:48.506 01:15:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275999 00:09:48.506 01:15:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.470 01:15:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:49.470 01:15:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:09:49.470 01:15:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:09:49.728 true 00:09:49.728 01:15:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275999 00:09:49.728 01:15:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.728 01:15:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:49.987 01:15:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:09:49.987 01:15:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:09:50.245 true 00:09:50.245 01:15:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275999 00:09:50.245 01:15:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:50.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:50.503 01:15:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:50.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:50.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:50.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:50.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:50.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:50.503 01:15:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:09:50.503 01:15:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:09:50.761 true 00:09:50.761 01:15:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275999 00:09:50.761 01:15:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.697 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:51.697 01:15:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:51.697 01:15:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:09:51.697 01:15:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 
00:09:51.955 true
00:09:51.955 01:15:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275999
00:09:51.955 01:15:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:52.214 01:15:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:52.214 01:15:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037
00:09:52.214 01:15:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037
00:09:52.472 true
00:09:52.472 01:15:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275999
00:09:52.472 01:15:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:52.731 01:15:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:52.988 01:15:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038
00:09:52.988 01:15:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038
00:09:52.988 true
00:09:52.988 01:15:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275999
00:09:52.988 01:15:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:53.246 Initializing NVMe Controllers
00:09:53.246 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:53.246 Controller IO queue size 128, less than required.
00:09:53.246 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:53.246 Controller IO queue size 128, less than required.
00:09:53.246 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:53.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:53.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:09:53.246 Initialization complete. Launching workers.
00:09:53.246 ========================================================
00:09:53.246                                                                           Latency(us)
00:09:53.246 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:09:53.246 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1480.96       0.72   40366.38    2177.76 1084622.01
00:09:53.246 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   12476.08       6.09   10252.16    2515.96  447830.64
00:09:53.246 ========================================================
00:09:53.246 Total                                                                    :   13957.04       6.81   13447.53    2177.76 1084622.01
00:09:53.246
00:09:53.246 01:15:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:53.502 01:15:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039
00:09:53.502 01:15:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039
00:09:53.502 true
00:09:53.502 01:15:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3275999
00:09:53.502 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3275999) - No such process
00:09:53.502 01:15:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3275999
00:09:53.502 01:15:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:53.760 01:15:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:54.017 01:15:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:09:54.017 01:15:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:09:54.017 01:15:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:09:54.017 01:15:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:54.017 01:15:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:09:54.017 null0
00:09:54.017 01:15:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:54.017 01:15:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:54.017 01:15:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:09:54.275 null1
00:09:54.275 01:15:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:54.275 01:15:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:54.275 01:15:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:09:54.532 null2
00:09:54.532 01:15:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:54.532 01:15:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i <
nthreads )) 00:09:54.532 01:15:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:09:54.532 null3 00:09:54.533 01:15:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:54.533 01:15:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:54.533 01:15:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:09:54.790 null4 00:09:54.790 01:15:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:54.790 01:15:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:54.790 01:15:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:09:55.048 null5 00:09:55.048 01:15:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:55.048 01:15:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:55.048 01:15:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:09:55.048 null6 00:09:55.048 01:15:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:55.048 01:15:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:55.048 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:09:55.307 null7 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
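From @58 onward the trace is the parallel phase: eight null bdevs are created, one add/remove worker is forked per namespace, and the script waits on all of them (the wait with pids 3281922 3281925 ... appears further down). A sketch reconstructed from the @14-@18 and @58-@66 trace lines; rpc_py is the same assumed variable as in the sketch above, and the exact loop layout is inferred from the trace, not taken from the script source:

  # add_remove: hot-add then hot-remove namespace $nsid ten times (@14-@18)
  add_remove() {
          local nsid=$1 bdev=$2
          for ((i = 0; i < 10; i++)); do
                  "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
                  "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
          done
  }

  nthreads=8
  pids=()
  for ((i = 0; i < nthreads; i++)); do          # @59-@60: one null bdev per worker, size 100 / block 4096 as traced
          "$rpc_py" bdev_null_create "null$i" 100 4096
  done
  for ((i = 0; i < nthreads; i++)); do          # @62-@64: run the workers concurrently
          add_remove $((i + 1)) "null$i" &      # nsid 1..8 against null0..null7, as in the trace
          pids+=($!)
  done
  wait "${pids[@]}"                             # @66: e.g. wait 3281922 3281925 ... in the log

The interleaved add_ns/remove_ns records that follow are these eight workers racing against each other on the same subsystem, which is the point of the stress test.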
00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3281922 3281925 3281926 3281930 3281932 3281935 3281937 3281940 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.307 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:55.566 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:55.566 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:55.566 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:55.566 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:55.566 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:55.566 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:55.566 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:55.566 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:55.825 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:55.825 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.825 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:55.825 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:55.825 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.825 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:09:55.825 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:55.825 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.825 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:55.825 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:55.825 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.825 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:55.825 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:55.825 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.825 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:55.825 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:55.825 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.825 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:55.825 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:55.825 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.825 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:55.825 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:55.825 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:55.825 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:55.825 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:55.825 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:55.825 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:55.825 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:55.825 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:55.825 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:55.825 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:55.825 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:56.084 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.084 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.084 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:56.084 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.084 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.084 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:56.084 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.084 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.084 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:56.084 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.084 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.084 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:56.084 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.084 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.084 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:56.084 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.084 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.084 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.084 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.084 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:56.084 01:15:21 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:56.084 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.084 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.084 01:15:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:56.342 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.342 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:56.342 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:56.342 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:56.342 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:56.342 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:56.342 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:56.342 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:56.342 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.342 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.342 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:56.342 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.342 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.342 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:56.342 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.342 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.342 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:56.342 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.342 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.342 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:56.342 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.342 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.342 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:56.342 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.342 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.342 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:56.342 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.342 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.342 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:56.342 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.342 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.342 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:56.601 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.601 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:56.601 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:56.601 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:56.601 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:56.601 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:56.601 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:56.601 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:56.859 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.859 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.859 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:56.859 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.859 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.859 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.860 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:56.860 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.860 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.860 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.860 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:56.860 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:56.860 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.860 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.860 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:56.860 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.860 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.860 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:56.860 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.860 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.860 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:56.860 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.860 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.860 
01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:56.860 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:57.118 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:57.118 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:57.118 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:57.118 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:57.118 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:57.118 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:57.118 01:15:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:57.118 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:57.118 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.118 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:57.118 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:57.118 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.118 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:57.118 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:57.118 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.118 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:57.118 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:57.118 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.118 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:57.118 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:57.119 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.119 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:57.119 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:57.119 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.119 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:57.119 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:57.119 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.119 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:57.119 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.119 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:57.119 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:57.376 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:57.376 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:57.376 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:57.376 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:57.376 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:57.376 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:57.376 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:57.376 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:57.634 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:09:57.634 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.634 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:57.634 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:57.634 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.634 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:57.634 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:57.634 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.634 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:57.634 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:57.634 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.634 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:57.634 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:57.634 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.634 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:57.634 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:57.634 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.634 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:57.634 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:57.634 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.634 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:57.634 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:57.634 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.634 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:57.634 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:57.634 
01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:57.634 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:57.634 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:57.634 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:57.892 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:57.892 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:57.892 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:57.892 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:57.892 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.892 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:57.892 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:57.892 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.892 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:57.892 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:57.892 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.892 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:57.892 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:57.892 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.892 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:57.892 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:57.892 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:57.892 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.892 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.892 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:57.892 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:57.892 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:57.892 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.892 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:57.892 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:57.892 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.892 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:58.151 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:58.151 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:58.151 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:58.151 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:58.151 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:58.151 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:58.151 01:15:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:58.151 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:58.410 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.410 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.410 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:58.410 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:09:58.410 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.410 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:58.410 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.410 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.410 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:58.410 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.410 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.410 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:58.410 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.410 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.410 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:58.410 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.410 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.410 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:58.410 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.410 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.410 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:58.410 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.410 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.411 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:58.411 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:58.411 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:58.411 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:58.411 01:15:24 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:58.411 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:58.411 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:58.411 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:58.411 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:58.669 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.669 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.669 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:58.669 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.669 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.669 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:58.669 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.669 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.669 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:58.669 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.669 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.669 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:58.669 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.669 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.669 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:58.669 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.669 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.669 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:58.669 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.669 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.669 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:58.669 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.669 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.669 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:58.927 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:58.927 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:58.927 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:58.927 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:58.927 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:58.927 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:58.927 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:58.927 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:58.927 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.928 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.186 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:59.186 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:59.186 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.186 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.186 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:59.186 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.186 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
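The ten passes traced above reduce to a small loop: each pass attaches the eight null bdevs (null0 through null7) to nqn.2016-06.io.spdk:cnode1 as namespaces 1 through 8 with nvmf_subsystem_add_ns, then hot-removes them again with nvmf_subsystem_remove_ns (ns_hotplug_stress.sh@16-18). A minimal sketch of that inner loop, assuming only the rpc.py path and argument shapes visible in the trace; the per-pass ordering in the log varies, which suggests the RPCs are raced against each other, so the shuffled-but-serial form below is purely illustrative:

  #!/usr/bin/env bash
  # Illustrative reconstruction of the add/remove cycle traced above; this
  # is not the verbatim ns_hotplug_stress.sh source. Paths and argument
  # values come from the xtrace lines; the shuf ordering is an assumption
  # made to mimic the varying order seen in the log.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  i=0
  while (( i < 10 )); do                         # ns_hotplug_stress.sh@16
      for n in $(shuf -i 1-8); do
          # Attach null bdev null(n-1) as namespace n (sh@17).
          "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
      done
      for n in $(shuf -i 1-8); do
          # Hot-remove the namespace while the subsystem stays live (sh@18).
          "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
      done
      (( ++i ))
  done
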
00:09:59.186 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.186 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:59.186 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.186 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:59.186 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.186 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:59.186 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.186 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:59.186 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:09:59.186 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:59.186 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:09:59.186 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:59.186 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:09:59.186 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:59.186 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:59.186 rmmod nvme_tcp 00:09:59.186 rmmod nvme_fabrics 00:09:59.186 rmmod nvme_keyring 00:09:59.186 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:59.186 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:09:59.186 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:09:59.186 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3275510 ']' 00:09:59.186 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3275510 00:09:59.186 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 3275510 ']' 00:09:59.186 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 3275510 00:09:59.186 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:09:59.186 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:59.186 01:15:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3275510 00:09:59.186 01:15:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:59.186 01:15:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:59.186 01:15:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3275510' 00:09:59.186 killing process with pid 3275510 00:09:59.186 01:15:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 3275510 00:09:59.186 01:15:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 3275510 00:09:59.444 01:15:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:59.444 01:15:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:59.444 01:15:25 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini
00:09:59.444 01:15:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:09:59.444 01:15:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:09:59.444 01:15:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:59.444 01:15:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:09:59.444 01:15:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:01.380 01:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:10:01.380
00:10:01.380 real 0m46.149s
00:10:01.380 user 3m13.025s
00:10:01.380 sys 0m14.368s
00:10:01.380 01:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable
00:10:01.380 01:15:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:10:01.380 ************************************
00:10:01.380 END TEST nvmf_ns_hotplug_stress
00:10:01.380 ************************************
00:10:01.380 01:15:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:10:01.380 01:15:27 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:10:01.380 01:15:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:10:01.380 01:15:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:01.380 01:15:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:10:01.637 ************************************
00:10:01.637 START TEST nvmf_connect_stress
00:10:01.637 ************************************
00:10:01.637 01:15:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:10:01.637 * Looking for test storage...
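The nvmf_connect_stress trace that follows is long, but functionally it just provisions a TCP target and then hammers it with connection churn. Condensed to the commands that matter under this run's flags (argument values are copied from the xtrace lines further down; rpc_cmd is the suite's wrapper driving the target's /var/tmp/spdk.sock, and the helper internals are elided):

  # Condensed sketch of the setup traced below; not a runnable excerpt of
  # connect_stress.sh itself. The helper functions are assumed to be
  # sourced from test/nvmf/common.sh.
  nvmftestinit            # set up the e810 port pair and the cvl_0_0_ns_spdk netns (10.0.0.1/10.0.0.2)
  nvmfappstart -m 0xE     # start nvmf_tgt inside the netns, wait on /var/tmp/spdk.sock

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd bdev_null_create NULL1 1000 512

  # Ten seconds of connect/disconnect load against the new subsystem,
  # watched with kill -0 while batched RPCs run alongside it.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress \
      -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
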
00:10:01.637 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:01.637 01:15:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:01.637 01:15:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:10:01.637 01:15:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:01.637 01:15:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:01.637 01:15:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:01.637 01:15:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:01.637 01:15:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:01.637 01:15:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:01.637 01:15:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:01.637 01:15:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:01.637 01:15:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:01.637 01:15:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:01.637 01:15:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:01.637 01:15:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:01.638 01:15:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:01.638 01:15:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:01.638 01:15:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:01.638 01:15:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:01.638 01:15:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:01.638 01:15:27 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:01.638 01:15:27 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:01.638 01:15:27 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:01.638 01:15:27 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.638 01:15:27 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.638 01:15:27 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.638 01:15:27 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:10:01.638 01:15:27 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.638 01:15:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:10:01.638 01:15:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:01.638 01:15:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:01.638 01:15:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:01.638 01:15:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:01.638 01:15:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:01.638 01:15:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:01.638 01:15:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:01.638 01:15:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:01.638 01:15:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:01.638 01:15:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:01.638 01:15:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:01.638 01:15:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:01.638 01:15:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:01.638 01:15:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:01.638 01:15:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.638 01:15:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:10:01.638 01:15:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.638 01:15:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:01.638 01:15:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:01.638 01:15:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:01.638 01:15:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:06.906 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:06.906 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:06.906 Found net devices under 0000:86:00.0: cvl_0_0 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:06.906 01:15:32 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:06.906 Found net devices under 0000:86:00.1: cvl_0_1 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:06.906 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:06.906 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms
00:10:06.906
00:10:06.906 --- 10.0.0.2 ping statistics ---
00:10:06.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:06.906 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms
00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:06.906 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:06.906 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms
00:10:06.906
00:10:06.906 --- 10.0.0.1 ping statistics ---
00:10:06.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:06.906 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms
00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0
00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:10:06.906 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:10:06.907 01:15:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE
00:10:06.907 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:10:06.907 01:15:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable
00:10:06.907 01:15:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:06.907 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=3286274
00:10:06.907 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:10:06.907 01:15:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 3286274
00:10:06.907 01:15:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 3286274 ']'
00:10:06.907 01:15:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:06.907 01:15:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100
00:10:06.907 01:15:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:06.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:06.907 01:15:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable
00:10:06.907 01:15:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:06.907 [2024-07-16 01:15:32.693266] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization...
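The waitforlisten step traced just above is the readiness gate for the target that was launched inside the cvl_0_0_ns_spdk namespace. A bare-bones equivalent, with the flags and paths taken from the trace and the helper's bookkeeping (max_retries=100, PID liveness checks) reduced to a plain loop; the rpc_get_methods probe is an assumption about how readiness can be detected, not a quote from autotest_common.sh:

  # Simplified stand-in for 'nvmfappstart -m 0xE' plus 'waitforlisten'.
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!

  for _ in $(seq 1 100); do
      # Ready once the target answers on its UNIX-domain RPC socket.
      if /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
             -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
          break
      fi
      kill -0 "$nvmfpid" || exit 1    # bail out if the target died during startup
      sleep 0.1
  done
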
00:10:06.907 [2024-07-16 01:15:32.693308] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.907 EAL: No free 2048 kB hugepages reported on node 1 00:10:06.907 [2024-07-16 01:15:32.750569] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:06.907 [2024-07-16 01:15:32.828788] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:06.907 [2024-07-16 01:15:32.828821] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:06.907 [2024-07-16 01:15:32.828831] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:06.907 [2024-07-16 01:15:32.828837] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:06.907 [2024-07-16 01:15:32.828842] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:06.907 [2024-07-16 01:15:32.828939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:06.907 [2024-07-16 01:15:32.829045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:06.907 [2024-07-16 01:15:32.829046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:07.840 [2024-07-16 01:15:33.537797] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:07.840 [2024-07-16 01:15:33.572417] tcp.c: 
981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:07.840 NULL1 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3286314 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:07.840 EAL: No free 2048 kB hugepages reported on node 1 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:07.840 01:15:33 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3286314 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.840 01:15:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:08.098 01:15:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.098 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3286314 00:10:08.098 01:15:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:08.098 01:15:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.098 01:15:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:08.356 01:15:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.356 01:15:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3286314 00:10:08.356 01:15:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:08.356 01:15:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.356 01:15:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:08.952 01:15:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.952 01:15:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # 
kill -0 3286314 00:10:08.952 01:15:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:08.952 01:15:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.952 01:15:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:09.241 01:15:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.241 01:15:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3286314 00:10:09.241 01:15:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:09.241 01:15:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.241 01:15:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:09.500 01:15:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.500 01:15:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3286314 00:10:09.500 01:15:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:09.500 01:15:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.500 01:15:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:09.759 01:15:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.759 01:15:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3286314 00:10:09.759 01:15:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:09.759 01:15:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.759 01:15:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:10.026 01:15:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.026 01:15:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3286314 00:10:10.026 01:15:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:10.026 01:15:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.026 01:15:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:10.284 01:15:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.284 01:15:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3286314 00:10:10.284 01:15:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:10.284 01:15:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.284 01:15:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:10.852 01:15:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.852 01:15:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3286314 00:10:10.852 01:15:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:10.852 01:15:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.852 01:15:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:11.111 01:15:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.111 01:15:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3286314 00:10:11.111 01:15:36 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:11.111 01:15:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.111 01:15:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:11.369 01:15:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.370 01:15:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3286314 00:10:11.370 01:15:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:11.370 01:15:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.370 01:15:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:11.628 01:15:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.628 01:15:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3286314 00:10:11.628 01:15:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:11.628 01:15:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.628 01:15:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:11.887 01:15:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.887 01:15:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3286314 00:10:11.887 01:15:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:11.887 01:15:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.887 01:15:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:12.455 01:15:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.455 01:15:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3286314 00:10:12.455 01:15:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:12.455 01:15:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.455 01:15:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:12.714 01:15:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.714 01:15:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3286314 00:10:12.714 01:15:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:12.714 01:15:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.714 01:15:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:12.973 01:15:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.973 01:15:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3286314 00:10:12.973 01:15:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:12.973 01:15:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.973 01:15:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:13.231 01:15:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.231 01:15:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3286314 00:10:13.231 01:15:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:10:13.231 01:15:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.231 01:15:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:13.490 01:15:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.490 01:15:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3286314 00:10:13.490 01:15:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:13.490 01:15:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.490 01:15:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:14.059 01:15:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.059 01:15:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3286314 00:10:14.059 01:15:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:14.059 01:15:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.059 01:15:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:14.317 01:15:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.317 01:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3286314 00:10:14.317 01:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:14.317 01:15:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.317 01:15:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:14.576 01:15:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.576 01:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3286314 00:10:14.576 01:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:14.576 01:15:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.576 01:15:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:14.834 01:15:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.834 01:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3286314 00:10:14.834 01:15:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:14.834 01:15:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.834 01:15:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:15.402 01:15:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.402 01:15:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3286314 00:10:15.402 01:15:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:15.402 01:15:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.402 01:15:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:15.661 01:15:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.661 01:15:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3286314 00:10:15.661 01:15:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:15.661 01:15:41 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.661 01:15:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:15.920 01:15:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.920 01:15:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3286314 00:10:15.920 01:15:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:15.920 01:15:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.920 01:15:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:16.177 01:15:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.177 01:15:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3286314 00:10:16.177 01:15:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:16.177 01:15:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.177 01:15:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:16.435 01:15:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.435 01:15:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3286314 00:10:16.435 01:15:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:16.435 01:15:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.435 01:15:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:17.001 01:15:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.001 01:15:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3286314 00:10:17.001 01:15:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:17.001 01:15:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.001 01:15:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:17.259 01:15:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.259 01:15:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3286314 00:10:17.259 01:15:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:17.259 01:15:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.259 01:15:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:17.517 01:15:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.517 01:15:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3286314 00:10:17.517 01:15:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:17.517 01:15:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.517 01:15:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:17.775 01:15:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.775 01:15:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3286314 00:10:17.775 01:15:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:17.775 01:15:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 
-- # xtrace_disable 00:10:17.775 01:15:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:17.775 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:18.342 01:15:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.342 01:15:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3286314 00:10:18.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3286314) - No such process 00:10:18.342 01:15:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3286314 00:10:18.342 01:15:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:18.342 01:15:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:18.342 01:15:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:10:18.342 01:15:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:18.342 01:15:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:10:18.342 01:15:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:18.342 01:15:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:10:18.342 01:15:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:18.342 01:15:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:18.342 rmmod nvme_tcp 00:10:18.342 rmmod nvme_fabrics 00:10:18.342 rmmod nvme_keyring 00:10:18.342 01:15:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:18.342 01:15:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:10:18.342 01:15:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:10:18.342 01:15:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 3286274 ']' 00:10:18.342 01:15:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 3286274 00:10:18.342 01:15:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 3286274 ']' 00:10:18.342 01:15:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 3286274 00:10:18.342 01:15:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:10:18.342 01:15:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:18.342 01:15:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3286274 00:10:18.342 01:15:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:18.342 01:15:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:18.342 01:15:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3286274' 00:10:18.342 killing process with pid 3286274 00:10:18.342 01:15:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 3286274 00:10:18.342 01:15:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 3286274 00:10:18.342 01:15:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:18.342 01:15:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:18.342 01:15:44 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:18.342 01:15:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:18.342 01:15:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:18.342 01:15:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.342 01:15:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:18.342 01:15:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.887 01:15:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:20.887 00:10:20.887 real 0m19.004s 00:10:20.887 user 0m41.742s 00:10:20.887 sys 0m7.875s 00:10:20.887 01:15:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:20.887 01:15:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:20.887 ************************************ 00:10:20.887 END TEST nvmf_connect_stress 00:10:20.887 ************************************ 00:10:20.887 01:15:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:20.887 01:15:46 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:20.887 01:15:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:20.887 01:15:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:20.887 01:15:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:20.887 ************************************ 00:10:20.887 START TEST nvmf_fused_ordering 00:10:20.887 ************************************ 00:10:20.887 01:15:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:20.887 * Looking for test storage... 
00:10:20.887 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.887 01:15:46 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:20.887 01:15:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:10:20.887 01:15:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:20.887 01:15:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:20.887 01:15:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:20.887 01:15:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:20.887 01:15:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:20.887 01:15:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:20.887 01:15:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:20.887 01:15:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:20.887 01:15:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:20.887 01:15:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:20.887 01:15:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:20.887 01:15:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:20.887 01:15:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:20.887 01:15:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:20.887 01:15:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:20.887 01:15:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:20.887 01:15:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:20.887 01:15:46 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:20.887 01:15:46 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:20.887 01:15:46 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:20.887 01:15:46 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.887 01:15:46 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.887 01:15:46 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.887 01:15:46 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:10:20.887 01:15:46 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.887 01:15:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:10:20.887 01:15:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:20.887 01:15:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:20.887 01:15:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:20.887 01:15:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:20.887 01:15:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:20.887 01:15:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:20.887 01:15:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:20.887 01:15:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:20.887 01:15:46 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:10:20.887 01:15:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:20.887 01:15:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:20.887 01:15:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:20.887 01:15:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:20.887 01:15:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:20.887 01:15:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.887 01:15:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:10:20.888 01:15:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.888 01:15:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:20.888 01:15:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:20.888 01:15:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:10:20.888 01:15:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:26.204 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:26.204 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:10:26.204 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:26.204 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:26.204 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:26.204 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:26.204 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:26.204 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:10:26.204 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:26.204 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:10:26.204 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:10:26.204 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:10:26.204 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:10:26.204 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:10:26.204 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:10:26.204 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:26.204 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:26.204 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:26.204 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:26.204 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:26.204 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:26.204 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:26.204 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:26.204 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:26.204 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:26.204 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:26.204 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:26.204 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:26.204 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:10:26.204 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:26.204 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:26.204 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:26.204 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:26.204 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:26.204 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:26.204 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:26.205 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:26.205 Found net devices under 0000:86:00.0: cvl_0_0 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:26.205 01:15:51 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:26.205 Found net devices under 0000:86:00.1: cvl_0_1 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:26.205 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:26.205 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:10:26.205 00:10:26.205 --- 10.0.0.2 ping statistics --- 00:10:26.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.205 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:26.205 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:26.205 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:10:26.205 00:10:26.205 --- 10.0.0.1 ping statistics --- 00:10:26.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.205 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=3291603 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 3291603 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 3291603 ']' 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:26.205 01:15:51 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:26.205 [2024-07-16 01:15:51.853583] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
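
[Annotation] The fused_ordering run repeats the point-to-point topology that nvmf_tcp_init traced above: one e810 port moves into a target-side namespace, the other stays in the host namespace as the initiator, and bidirectional pings confirm the link before the target starts. Collected in one place, with device and namespace names taken verbatim from the output above (steps simplified; address flushes and error handling omitted):

    # Target side lives in its own namespace; initiator stays in the host netns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target-side e810 port
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                               # host -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # target namespace -> host

With both pings answering, nvmfappstart can launch nvmf_tgt inside cvl_0_0_ns_spdk and the initiator-side test binaries reach it at 10.0.0.2:4420, as the records below show.
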
00:10:26.205 [2024-07-16 01:15:51.853629] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:26.205 EAL: No free 2048 kB hugepages reported on node 1 00:10:26.205 [2024-07-16 01:15:51.913064] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.205 [2024-07-16 01:15:51.991093] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:26.205 [2024-07-16 01:15:51.991128] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:26.206 [2024-07-16 01:15:51.991134] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:26.206 [2024-07-16 01:15:51.991140] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:26.206 [2024-07-16 01:15:51.991145] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:26.206 [2024-07-16 01:15:51.991164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:26.773 01:15:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:26.773 01:15:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:10:26.773 01:15:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:26.773 01:15:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:26.773 01:15:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:26.773 01:15:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:26.773 01:15:52 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:26.773 01:15:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.773 01:15:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:26.773 [2024-07-16 01:15:52.693371] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:26.773 01:15:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.773 01:15:52 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:26.773 01:15:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.773 01:15:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:26.773 01:15:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.773 01:15:52 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:26.773 01:15:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.773 01:15:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:26.773 [2024-07-16 01:15:52.709504] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:26.773 01:15:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.773 01:15:52 
nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:26.773 01:15:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.773 01:15:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:26.773 NULL1 00:10:26.773 01:15:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.773 01:15:52 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:10:26.773 01:15:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.773 01:15:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:26.773 01:15:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.773 01:15:52 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:26.773 01:15:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.773 01:15:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:26.773 01:15:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.773 01:15:52 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:27.032 [2024-07-16 01:15:52.761763] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:10:27.032 [2024-07-16 01:15:52.761794] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3291708 ] 00:10:27.032 EAL: No free 2048 kB hugepages reported on node 1 00:10:27.290 Attached to nqn.2016-06.io.spdk:cnode1 00:10:27.290 Namespace ID: 1 size: 1GB 00:10:27.290 fused_ordering(0) 00:10:27.290 fused_ordering(1) 00:10:27.290 fused_ordering(2) 00:10:27.290 fused_ordering(3) 00:10:27.290 fused_ordering(4) 00:10:27.290 fused_ordering(5) 00:10:27.290 fused_ordering(6) 00:10:27.290 fused_ordering(7) 00:10:27.290 fused_ordering(8) 00:10:27.290 fused_ordering(9) 00:10:27.290 fused_ordering(10) 00:10:27.290 fused_ordering(11) 00:10:27.290 fused_ordering(12) 00:10:27.290 fused_ordering(13) 00:10:27.290 fused_ordering(14) 00:10:27.290 fused_ordering(15) 00:10:27.290 fused_ordering(16) 00:10:27.290 fused_ordering(17) 00:10:27.290 fused_ordering(18) 00:10:27.290 fused_ordering(19) 00:10:27.290 fused_ordering(20) 00:10:27.290 fused_ordering(21) 00:10:27.290 fused_ordering(22) 00:10:27.290 fused_ordering(23) 00:10:27.290 fused_ordering(24) 00:10:27.290 fused_ordering(25) 00:10:27.290 fused_ordering(26) 00:10:27.290 fused_ordering(27) 00:10:27.290 fused_ordering(28) 00:10:27.290 fused_ordering(29) 00:10:27.290 fused_ordering(30) 00:10:27.290 fused_ordering(31) 00:10:27.290 fused_ordering(32) 00:10:27.290 fused_ordering(33) 00:10:27.290 fused_ordering(34) 00:10:27.290 fused_ordering(35) 00:10:27.290 fused_ordering(36) 00:10:27.290 fused_ordering(37) 00:10:27.290 fused_ordering(38) 00:10:27.290 fused_ordering(39) 00:10:27.290 fused_ordering(40) 00:10:27.290 fused_ordering(41) 00:10:27.290 fused_ordering(42) 00:10:27.290 fused_ordering(43) 00:10:27.290 
fused_ordering(44) 00:10:27.290 … fused_ordering(1007) 00:10:28.636 [entries 44-1007 of the per-command fused_ordering counter elided; the tool logs all 1024 completions in sequence, with timestamps advancing from 00:10:27.290 to 00:10:28.636. Around entry 821 the following target-side message was interleaved mid-line in the original output:] 00:10:28.635 [2024-07-16 01:15:54.442615] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x212b750 is same with the state(5) to be set
fused_ordering(1008) 00:10:28.636 fused_ordering(1009) 00:10:28.636 fused_ordering(1010) 00:10:28.636 fused_ordering(1011) 00:10:28.636 fused_ordering(1012) 00:10:28.636 fused_ordering(1013) 00:10:28.636 fused_ordering(1014) 00:10:28.636 fused_ordering(1015) 00:10:28.636 fused_ordering(1016) 00:10:28.636 fused_ordering(1017) 00:10:28.636 fused_ordering(1018) 00:10:28.636 fused_ordering(1019) 00:10:28.636 fused_ordering(1020) 00:10:28.636 fused_ordering(1021) 00:10:28.636 fused_ordering(1022) 00:10:28.636 fused_ordering(1023) 00:10:28.636 01:15:54 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:10:28.636 01:15:54 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:10:28.636 01:15:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:28.636 01:15:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:10:28.636 01:15:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:28.636 01:15:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:10:28.636 01:15:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:28.636 01:15:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:28.636 rmmod nvme_tcp 00:10:28.636 rmmod nvme_fabrics 00:10:28.636 rmmod nvme_keyring 00:10:28.636 01:15:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:28.636 01:15:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:10:28.636 01:15:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:10:28.636 01:15:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 3291603 ']' 00:10:28.636 01:15:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 3291603 00:10:28.636 01:15:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 3291603 ']' 00:10:28.636 01:15:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 3291603 00:10:28.636 01:15:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:10:28.636 01:15:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:28.636 01:15:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3291603 00:10:28.636 01:15:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:28.636 01:15:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:28.636 01:15:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3291603' 00:10:28.636 killing process with pid 3291603 00:10:28.636 01:15:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 3291603 00:10:28.636 01:15:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 3291603 00:10:28.895 01:15:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:28.895 01:15:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:28.895 01:15:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:28.895 01:15:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:28.895 01:15:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:28.895 01:15:54 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.895 01:15:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:28.895 01:15:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:31.434 01:15:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:31.434 00:10:31.434 real 0m10.360s 00:10:31.434 user 0m5.267s 00:10:31.434 sys 0m5.154s 00:10:31.434 01:15:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:31.434 01:15:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:31.434 ************************************ 00:10:31.434 END TEST nvmf_fused_ordering 00:10:31.434 ************************************ 00:10:31.434 01:15:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:31.434 01:15:56 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:31.434 01:15:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:31.434 01:15:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:31.434 01:15:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:31.434 ************************************ 00:10:31.434 START TEST nvmf_delete_subsystem 00:10:31.434 ************************************ 00:10:31.434 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:31.434 * Looking for test storage... 00:10:31.435 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:10:31.435 01:15:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:10:36.713 01:16:02 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:36.713 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:36.713 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:36.713 
01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.713 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:36.714 Found net devices under 0000:86:00.0: cvl_0_0 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:36.714 Found net devices under 0000:86:00.1: cvl_0_1 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:36.714 01:16:02 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:36.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:36.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:10:36.714 00:10:36.714 --- 10.0.0.2 ping statistics --- 00:10:36.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.714 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:36.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:36.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:10:36.714 00:10:36.714 --- 10.0.0.1 ping statistics --- 00:10:36.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.714 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=3295447 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 3295447 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 3295447 ']' 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:36.714 01:16:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:36.714 [2024-07-16 01:16:02.371543] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:10:36.714 [2024-07-16 01:16:02.371584] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:36.714 EAL: No free 2048 kB hugepages reported on node 1 00:10:36.714 [2024-07-16 01:16:02.430826] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:36.714 [2024-07-16 01:16:02.506530] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:36.714 [2024-07-16 01:16:02.506567] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:36.714 [2024-07-16 01:16:02.506573] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:36.714 [2024-07-16 01:16:02.506581] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:36.714 [2024-07-16 01:16:02.506586] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:36.714 [2024-07-16 01:16:02.510356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:36.714 [2024-07-16 01:16:02.510361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.281 01:16:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:37.281 01:16:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:10:37.281 01:16:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:37.281 01:16:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:37.281 01:16:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:37.281 01:16:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:37.282 01:16:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:37.282 01:16:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.282 01:16:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:37.282 [2024-07-16 01:16:03.194854] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:37.282 01:16:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.282 01:16:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:37.282 01:16:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.282 01:16:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:37.282 01:16:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.282 01:16:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:37.282 01:16:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.282 01:16:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:37.282 [2024-07-16 01:16:03.215007] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:37.282 01:16:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.282 01:16:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:37.282 01:16:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.282 01:16:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:37.282 NULL1 00:10:37.282 01:16:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:10:37.282 01:16:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:37.282 01:16:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.282 01:16:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:37.282 Delay0 00:10:37.282 01:16:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.282 01:16:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:37.282 01:16:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.282 01:16:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:37.282 01:16:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.282 01:16:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3295691 00:10:37.282 01:16:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:10:37.282 01:16:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:37.540 EAL: No free 2048 kB hugepages reported on node 1 00:10:37.540 [2024-07-16 01:16:03.295648] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
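The xtrace records above amount to a complete target-side setup followed by a background load generator. A minimal standalone sketch of the same sequence, assuming $SPDK points at the checkout and that rpc_cmd is the harness's wrapper around scripts/rpc.py (the commands themselves are taken from the trace; the flag comments are reconstructions, not harness documentation):

  # Target side: TCP transport, one subsystem, one listener.
  $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001 -m 10       # -a: allow any host, -m: at most 10 namespaces
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Backing store: a 1000 MB, 512 B-block null bdev wrapped in a delay bdev whose
  # read/write latencies are all 1,000,000 us (1 s), so submitted I/O is still
  # sitting in Delay0's queue when the subsystem is deleted.
  $SPDK/scripts/rpc.py bdev_null_create NULL1 1000 512
  $SPDK/scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # Initiator side: 5 s of queue-depth-128 random 70/30 read/write 512 B I/O from
  # cores 2-3 (-c 0xC), run in the background so the subsystem can be deleted
  # while the I/O is in flight.
  $SPDK/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!

With every request parked in the delay bdev, the nvmf_delete_subsystem call that follows has to abort them, which is exactly the burst of failed completions recorded next.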
00:10:39.567 01:16:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 01:16:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:10:39.567 [several hundred perf completion records elided: runs of 'Read completed with error (sct=0, sc=8)' and 'Write completed with error (sct=0, sc=8)' interleaved with 'starting I/O failed: -6', emitted while the deletion aborts the I/O queued in Delay0]
00:10:39.568 [2024-07-16 01:16:05.516616] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e2af0 is same with the state(5) to be set
00:10:39.569 [2024-07-16 01:16:05.517982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0d98000c00 is same with the state(5) to be set
00:10:40.505 [2024-07-16 01:16:06.433728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e3a70 is same with the state(5) to be set
00:10:40.764 [a further burst of 'Read/Write completed with error (sct=0, sc=8)' completion records elided]
00:10:40.765 [2024-07-16 01:16:06.519495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0d9800d370 is same with the state(5) to be set
00:10:40.765 [2024-07-16 01:16:06.520628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e27a0 is same with the state(5) to be set
00:10:40.765 [2024-07-16 01:16:06.520778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e2e40 is same with the state(5) to be set
00:10:40.765 [2024-07-16 01:16:06.520921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e2390 is same with the state(5) to be set
00:10:40.765 Initializing NVMe Controllers
00:10:40.765 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:40.765 Controller IO queue size 128, less than required.
00:10:40.765 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:40.765 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:10:40.765 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:10:40.765 Initialization complete. Launching workers.
00:10:40.765 ========================================================
00:10:40.765 Latency(us)
00:10:40.765 Device Information : IOPS MiB/s Average min max
00:10:40.765 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 184.53 0.09 958001.29 1658.99 1059434.98
00:10:40.765 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 173.88 0.08 854818.05 327.48 1060305.64
00:10:40.765 ========================================================
00:10:40.765 Total : 358.41 0.18 907943.47 327.48 1060305.64
00:10:40.765
00:10:40.765 [2024-07-16 01:16:06.521492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22e3a70 (9): Bad file descriptor
00:10:40.765 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:10:40.765 01:16:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 01:16:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3295691 01:16:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:10:41.333 01:16:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 01:16:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3295691
00:10:41.333 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3295691) - No such process
00:10:41.333 01:16:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3295691 01:16:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 01:16:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 3295691 01:16:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 01:16:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:16:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 01:16:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 01:16:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 3295691 01:16:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 01:16:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 01:16:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 01:16:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 01:16:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 01:16:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 01:16:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 01:16:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 01:16:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
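For scale: the aborted run completed only 184.53 + 173.88 = 358.41 IOPS of 512 B I/O across the two cores, i.e. 358.41 x 512 B = ~0.18 MiB/s, and the average latencies (958001.29 us and 854818.05 us) sit just under the 1 s delay configured on Delay0. In other words, essentially every request was still queued in the delay bdev when nvmf_delete_subsystem aborted it, which is the behavior this test exists to exercise; the harness then only has to confirm that spdk_nvme_perf died with a nonzero exit status (the NOT wait check above) before rebuilding the subsystem for the second pass.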
00:10:41.333 01:16:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.333 01:16:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:41.333 [2024-07-16 01:16:07.046829] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:41.333 01:16:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.333 01:16:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:41.333 01:16:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.333 01:16:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:41.333 01:16:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.333 01:16:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3296348 00:10:41.333 01:16:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:10:41.333 01:16:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:41.333 01:16:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3296348 00:10:41.333 01:16:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:41.333 EAL: No free 2048 kB hugepages reported on node 1 00:10:41.333 [2024-07-16 01:16:07.106974] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
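The records that follow are the harness polling for the second perf run to finish on its own: this pass uses -t 3, so spdk_nvme_perf is expected to exit cleanly before the subsystem is torn down. Reconstructed as a sketch rather than a verbatim copy of delete_subsystem.sh, the loop traced below is:

  # Poll until spdk_nvme_perf ($perf_pid, assumed set when perf was launched)
  # exits, giving up after roughly 10 s (20 iterations x 0.5 s).
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      (( delay++ > 20 )) && exit 1   # perf should have exited by now; fail the test
      sleep 0.5
  done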
00:10:41.592 01:16:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 01:16:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3296348 01:16:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:10:42.160 01:16:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 01:16:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3296348 01:16:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:10:42.727 01:16:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 01:16:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3296348 01:16:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:10:43.294 01:16:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 01:16:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3296348 01:16:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:10:43.862 01:16:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 01:16:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3296348 01:16:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:10:44.121 01:16:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 01:16:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3296348 01:16:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:10:44.379 Initializing NVMe Controllers
00:10:44.379 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:44.379 Controller IO queue size 128, less than required.
00:10:44.379 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:44.379 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:10:44.379 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:10:44.379 Initialization complete. Launching workers.
00:10:44.379 ========================================================
00:10:44.379 Latency(us)
00:10:44.379 Device Information : IOPS MiB/s Average min max
00:10:44.379 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1011411.39 1000146.11 1058993.79
00:10:44.379 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1013381.30 1000220.19 1060595.92
00:10:44.379 ========================================================
00:10:44.379 Total : 256.00 0.12 1012396.34 1000146.11 1060595.92
00:10:44.379
00:10:44.638 01:16:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 01:16:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3296348
00:10:44.638 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3296348) - No such process
00:10:44.638 01:16:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3296348 01:16:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 01:16:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 01:16:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 01:16:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 01:16:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 01:16:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 01:16:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 01:16:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:10:44.638 rmmod nvme_tcp
00:10:44.638 rmmod nvme_fabrics
00:10:44.897 rmmod nvme_keyring
00:10:44.897 01:16:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 01:16:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 01:16:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 01:16:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 3295447 ']' 01:16:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 3295447 01:16:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 3295447 ']' 01:16:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 3295447 01:16:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 01:16:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 01:16:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3295447 01:16:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 01:16:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 01:16:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3295447'
00:10:44.897 killing process with pid 3295447 01:16:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 3295447 01:16:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait
3295447 00:10:45.155 01:16:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:45.155 01:16:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:45.155 01:16:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:45.155 01:16:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:45.155 01:16:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:45.155 01:16:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.155 01:16:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:45.155 01:16:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.075 01:16:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:47.075 00:10:47.075 real 0m16.086s 00:10:47.075 user 0m30.494s 00:10:47.075 sys 0m4.960s 00:10:47.075 01:16:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:47.075 01:16:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:47.075 ************************************ 00:10:47.075 END TEST nvmf_delete_subsystem 00:10:47.075 ************************************ 00:10:47.075 01:16:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:47.075 01:16:12 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:10:47.075 01:16:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:47.075 01:16:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:47.075 01:16:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:47.075 ************************************ 00:10:47.075 START TEST nvmf_ns_masking 00:10:47.075 ************************************ 00:10:47.075 01:16:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:10:47.332 * Looking for test storage... 
00:10:47.332 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=62194444-e3ee-41c8-a5bd-11e17a2960bc 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=de85b736-366f-4a38-84ce-793c35af675c 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=ab94c945-c28b-4945-8fef-f568aaca1c2c 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:47.332 01:16:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:10:47.333 01:16:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:52.601 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:52.601 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:10:52.601 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:52.601 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:52.601 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:52.601 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:52.601 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:52.601 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:10:52.601 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:52.601 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:10:52.601 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:10:52.601 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:10:52.601 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:10:52.601 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:10:52.601 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:10:52.601 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:52.601 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:52.601 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:52.601 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:52.601 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:52.601 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:52.601 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:52.601 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:52.601 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:52.601 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:52.601 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:52.601 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:52.601 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:52.602 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:52.602 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:52.602 
01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:52.602 Found net devices under 0000:86:00.0: cvl_0_0 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:52.602 Found net devices under 0000:86:00.1: cvl_0_1 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:52.602 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:52.602 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:10:52.602 00:10:52.602 --- 10.0.0.2 ping statistics --- 00:10:52.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:52.602 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:52.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:52.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:10:52.602 00:10:52.602 --- 10.0.0.1 ping statistics --- 00:10:52.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:52.602 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=3300372 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 3300372 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 3300372 ']' 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:52.602 01:16:18 
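The nvmf_tcp_init sequence above boils down to a two-port loopback topology: one port of the NIC is moved into a private network namespace to host the SPDK target, while the second port stays in the root namespace as the initiator. A minimal recap under the interface names and addresses used in this run:

    # Target port lives in its own namespace; initiator port stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Address each end of the link.
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side

    # Bring links (and the namespace loopback) up, open the NVMe/TCP port, verify.
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target application itself then runs inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...), which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD above.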
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:52.602 01:16:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:52.602 [2024-07-16 01:16:18.490842] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:10:52.602 [2024-07-16 01:16:18.490888] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:52.602 EAL: No free 2048 kB hugepages reported on node 1 00:10:52.602 [2024-07-16 01:16:18.550927] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.860 [2024-07-16 01:16:18.629320] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:52.861 [2024-07-16 01:16:18.629358] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:52.861 [2024-07-16 01:16:18.629365] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:52.861 [2024-07-16 01:16:18.629371] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:52.861 [2024-07-16 01:16:18.629375] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:52.861 [2024-07-16 01:16:18.629393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.427 01:16:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:53.427 01:16:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:10:53.427 01:16:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:53.427 01:16:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:53.427 01:16:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:53.427 01:16:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:53.427 01:16:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:53.685 [2024-07-16 01:16:19.474769] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:53.685 01:16:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:10:53.685 01:16:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:10:53.685 01:16:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:53.685 Malloc1 00:10:53.944 01:16:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:53.944 Malloc2 00:10:53.944 01:16:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
00:10:54.202 01:16:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:10:54.461 01:16:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:54.461 [2024-07-16 01:16:20.360635] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:54.461 01:16:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:10:54.461 01:16:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ab94c945-c28b-4945-8fef-f568aaca1c2c -a 10.0.0.2 -s 4420 -i 4 00:10:54.720 01:16:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:10:54.720 01:16:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:10:54.720 01:16:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:54.720 01:16:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:54.720 01:16:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:10:56.620 01:16:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:56.620 01:16:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:56.620 01:16:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:56.620 01:16:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:56.620 01:16:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:56.878 01:16:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:10:56.878 01:16:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:10:56.878 01:16:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:56.878 01:16:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:10:56.878 01:16:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:10:56.878 01:16:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:10:56.878 01:16:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:56.878 01:16:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:56.878 [ 0]:0x1 00:10:56.878 01:16:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:56.878 01:16:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:56.878 01:16:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=de46be79d55e422cad6d59d53add6dee 00:10:56.878 01:16:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ de46be79d55e422cad6d59d53add6dee != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:56.878 01:16:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
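The connect/ns_is_visible pair exercised above is the heart of the masking test: the initiator connects with a fixed host NQN, the controller name is resolved from nvme list-subsys, and visibility of a namespace ID is judged both by its presence in nvme list-ns and by a non-zero NGUID from nvme id-ns. A plausible reconstruction of the helper, inferred from the trace (the real version lives in test/nvmf/target/ns_masking.sh):

    ctrl_id=nvme0   # resolved above from 'nvme list-subsys -o json' via jq

    ns_is_visible() {
        nvme list-ns "/dev/$ctrl_id" | grep "$1"
        nguid=$(nvme id-ns "/dev/$ctrl_id" -n "$1" -o json | jq -r .nguid)
        # Visible namespaces report their real NGUID; masked ones come back all zeros.
        [[ $nguid != "00000000000000000000000000000000" ]]
    }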
00:10:57.135 01:16:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:10:57.135 01:16:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:57.135 01:16:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:57.135 [ 0]:0x1 00:10:57.135 01:16:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:57.135 01:16:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:57.135 01:16:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=de46be79d55e422cad6d59d53add6dee 00:10:57.135 01:16:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ de46be79d55e422cad6d59d53add6dee != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:57.135 01:16:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:10:57.135 01:16:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:57.135 01:16:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:57.135 [ 1]:0x2 00:10:57.135 01:16:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:57.135 01:16:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:57.135 01:16:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0976a3cd93ff4f5698735e11175537f5 00:10:57.135 01:16:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0976a3cd93ff4f5698735e11175537f5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:57.135 01:16:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:10:57.135 01:16:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:57.393 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.393 01:16:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:57.652 01:16:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:10:57.652 01:16:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:10:57.652 01:16:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ab94c945-c28b-4945-8fef-f568aaca1c2c -a 10.0.0.2 -s 4420 -i 4 00:10:57.909 01:16:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:10:57.909 01:16:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:10:57.909 01:16:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:57.909 01:16:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:10:57.909 01:16:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:10:57.909 01:16:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:10:59.805 01:16:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:59.805 01:16:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:59.805 01:16:25 
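waitforserial, invoked after every reconnect above, simply polls lsblk until the expected number of block devices carrying the subsystem serial shows up. Roughly, judging from the common/autotest_common.sh steps in the trace:

    # Sketch of the polling loop; arguments are the serial and the expected device count.
    waitforserial() {
        local serial=$1 nvme_device_counter=${2:-1} nvme_devices=0 i=0
        while ((i++ <= 15)); do
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            ((nvme_devices == nvme_device_counter)) && return 0
            sleep 2
        done
        return 1
    }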
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:59.805 01:16:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:59.805 01:16:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:59.805 01:16:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:10:59.805 01:16:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:59.805 01:16:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:00.062 01:16:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:00.062 01:16:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:00.062 01:16:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:11:00.062 01:16:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:00.062 01:16:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:00.062 01:16:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:00.062 01:16:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:00.062 01:16:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:00.062 01:16:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:00.062 01:16:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:00.062 01:16:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:00.062 01:16:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:00.062 01:16:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:00.062 01:16:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:00.062 01:16:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:00.062 01:16:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:00.062 01:16:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:00.062 01:16:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:00.062 01:16:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:00.063 01:16:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:00.063 01:16:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:11:00.063 01:16:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:00.063 01:16:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:00.063 [ 0]:0x2 00:11:00.063 01:16:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:00.063 01:16:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:00.063 01:16:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0976a3cd93ff4f5698735e11175537f5 00:11:00.063 01:16:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
0976a3cd93ff4f5698735e11175537f5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:00.063 01:16:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:00.320 01:16:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:11:00.320 01:16:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:00.320 01:16:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:00.320 [ 0]:0x1 00:11:00.320 01:16:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:00.320 01:16:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:00.320 01:16:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=de46be79d55e422cad6d59d53add6dee 00:11:00.320 01:16:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ de46be79d55e422cad6d59d53add6dee != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:00.320 01:16:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:11:00.320 01:16:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:00.320 01:16:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:00.320 [ 1]:0x2 00:11:00.320 01:16:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:00.320 01:16:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:00.578 01:16:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0976a3cd93ff4f5698735e11175537f5 00:11:00.578 01:16:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0976a3cd93ff4f5698735e11175537f5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:00.578 01:16:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:00.578 01:16:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:11:00.578 01:16:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:00.578 01:16:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:00.578 01:16:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:00.578 01:16:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:00.578 01:16:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:00.578 01:16:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:00.578 01:16:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:00.578 01:16:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:00.578 01:16:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:00.578 01:16:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:00.578 01:16:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:00.578 01:16:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:11:00.578 01:16:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:00.578 01:16:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:00.578 01:16:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:00.837 01:16:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:00.837 01:16:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:00.837 01:16:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:11:00.837 01:16:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:00.837 01:16:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:00.837 [ 0]:0x2 00:11:00.837 01:16:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:00.837 01:16:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:00.837 01:16:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0976a3cd93ff4f5698735e11175537f5 00:11:00.837 01:16:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0976a3cd93ff4f5698735e11175537f5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:00.837 01:16:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:11:00.837 01:16:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:00.837 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.837 01:16:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:00.837 01:16:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:11:00.837 01:16:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ab94c945-c28b-4945-8fef-f568aaca1c2c -a 10.0.0.2 -s 4420 -i 4 00:11:01.095 01:16:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:01.095 01:16:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:01.095 01:16:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:01.095 01:16:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:11:01.095 01:16:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:11:01.095 01:16:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:03.000 01:16:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:03.000 01:16:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:03.000 01:16:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:03.000 01:16:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:11:03.000 01:16:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:03.000 01:16:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
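The checks above spell out the masking contract: a namespace attached with --no-auto-visible is invisible to every host until that host's NQN is granted access, and revoking the grant hides it again, with the flip taking effect on the already-connected controller. The RPC sequence driving those checks, condensed:

    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    # ns 1 invisible to host1: list-ns misses it, id-ns reports an all-zero NGUID
    rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    # ns 1 now visible to host1 alongside the auto-visible ns 2
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    # ns 1 hidden again; only ns 2 remains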
00:11:03.000 01:16:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:03.000 01:16:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:03.000 01:16:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:03.000 01:16:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:03.000 01:16:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:11:03.259 01:16:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:03.259 01:16:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:03.259 [ 0]:0x1 00:11:03.259 01:16:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:03.259 01:16:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:03.259 01:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=de46be79d55e422cad6d59d53add6dee 00:11:03.259 01:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ de46be79d55e422cad6d59d53add6dee != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:03.259 01:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:11:03.259 01:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:03.259 01:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:03.259 [ 1]:0x2 00:11:03.259 01:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:03.259 01:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:03.259 01:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0976a3cd93ff4f5698735e11175537f5 00:11:03.259 01:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0976a3cd93ff4f5698735e11175537f5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:03.259 01:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:03.521 01:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:11:03.521 01:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:03.521 01:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:03.521 01:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:03.521 01:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:03.521 01:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:03.521 01:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:03.521 01:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:03.521 01:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:03.521 01:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:03.521 01:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:03.521 01:16:29 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:11:03.521 01:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:03.521 01:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:03.521 01:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:03.521 01:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:03.521 01:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:03.521 01:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:03.521 01:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:11:03.521 01:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:03.521 01:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:03.521 [ 0]:0x2 00:11:03.521 01:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:03.521 01:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:03.521 01:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0976a3cd93ff4f5698735e11175537f5 00:11:03.521 01:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0976a3cd93ff4f5698735e11175537f5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:03.521 01:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:03.521 01:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:03.521 01:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:03.521 01:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:03.521 01:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:03.521 01:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:03.521 01:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:03.521 01:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:03.521 01:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:03.521 01:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:03.521 01:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:03.521 01:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:03.780 [2024-07-16 01:16:29.526595] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:03.780 request: 00:11:03.780 { 00:11:03.780 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:03.780 "nsid": 2, 00:11:03.780 "host": "nqn.2016-06.io.spdk:host1", 00:11:03.780 "method": "nvmf_ns_remove_host", 00:11:03.780 "req_id": 1 00:11:03.780 } 00:11:03.780 Got JSON-RPC error response 00:11:03.780 response: 00:11:03.780 { 00:11:03.780 "code": -32602, 00:11:03.780 "message": "Invalid parameters" 00:11:03.780 } 00:11:03.780 01:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:03.780 01:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:03.780 01:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:03.780 01:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:03.780 01:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:11:03.780 01:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:03.780 01:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:03.780 01:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:03.780 01:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:03.780 01:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:03.780 01:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:03.780 01:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:03.780 01:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:03.780 01:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:03.780 01:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:03.780 01:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:03.780 01:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:03.781 01:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:03.781 01:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:03.781 01:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:03.781 01:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:03.781 01:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:03.781 01:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:11:03.781 01:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:03.781 01:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:03.781 [ 0]:0x2 00:11:03.781 01:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:03.781 01:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:03.781 01:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0976a3cd93ff4f5698735e11175537f5 00:11:03.781 01:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
0976a3cd93ff4f5698735e11175537f5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:03.781 01:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:11:03.781 01:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:04.040 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.040 01:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3302380 00:11:04.040 01:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:11:04.040 01:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3302380 /var/tmp/host.sock 00:11:04.040 01:16:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:11:04.040 01:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 3302380 ']' 00:11:04.040 01:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:11:04.040 01:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:04.040 01:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:11:04.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:04.040 01:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:04.040 01:16:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:04.040 [2024-07-16 01:16:29.866310] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
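The NOT-wrapped nvmf_ns_remove_host call above is a negative test: per-host visibility can presumably only be managed for namespaces created with --no-auto-visible, so asking to remove a host from the auto-visible namespace 2 is rejected with the -32602 Invalid parameters response shown. Equivalent invocation, expected to fail:

    # nsid 2 was added without --no-auto-visible, so this RPC must error out.
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 \
        || echo 'expected failure: Invalid parameters (-32602)'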
00:11:04.040 [2024-07-16 01:16:29.866357] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3302380 ] 00:11:04.040 EAL: No free 2048 kB hugepages reported on node 1 00:11:04.040 [2024-07-16 01:16:29.919216] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.040 [2024-07-16 01:16:29.990653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:04.976 01:16:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:04.976 01:16:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:11:04.976 01:16:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:04.976 01:16:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:05.234 01:16:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 62194444-e3ee-41c8-a5bd-11e17a2960bc 00:11:05.234 01:16:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:05.234 01:16:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 62194444E3EE41C8A5BD11E17A2960BC -i 00:11:05.234 01:16:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid de85b736-366f-4a38-84ce-793c35af675c 00:11:05.234 01:16:31 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:05.234 01:16:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g DE85B736366F4A3884CE793C35AF675C -i 00:11:05.493 01:16:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:05.789 01:16:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:11:05.789 01:16:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:05.789 01:16:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:06.073 nvme0n1 00:11:06.331 01:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:06.331 01:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:11:06.590 nvme1n2 00:11:06.590 01:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:11:06.590 01:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:11:06.590 01:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:11:06.590 01:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:11:06.590 01:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:11:06.849 01:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:11:06.849 01:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:11:06.849 01:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:11:06.849 01:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:11:06.849 01:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 62194444-e3ee-41c8-a5bd-11e17a2960bc == \6\2\1\9\4\4\4\4\-\e\3\e\e\-\4\1\c\8\-\a\5\b\d\-\1\1\e\1\7\a\2\9\6\0\b\c ]] 00:11:06.849 01:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:11:06.849 01:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:11:06.849 01:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:11:07.109 01:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ de85b736-366f-4a38-84ce-793c35af675c == \d\e\8\5\b\7\3\6\-\3\6\6\f\-\4\a\3\8\-\8\4\c\e\-\7\9\3\c\3\5\a\f\6\7\5\c ]] 00:11:07.109 01:16:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 3302380 00:11:07.109 01:16:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 3302380 ']' 00:11:07.109 01:16:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 3302380 00:11:07.109 01:16:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:11:07.109 01:16:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:07.109 01:16:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3302380 00:11:07.109 01:16:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:07.109 01:16:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:07.109 01:16:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3302380' 00:11:07.109 killing process with pid 3302380 00:11:07.109 01:16:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 3302380 00:11:07.109 01:16:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 3302380 00:11:07.379 01:16:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:07.651 01:16:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:11:07.651 01:16:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:11:07.651 01:16:33 
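The final phase above reran the target with two masked namespaces, each tagged with an explicit NGUID and granted to a different host NQN, then verified from a second SPDK app (spdk_tgt listening on /var/tmp/host.sock) that each host-scoped controller sees exactly its own namespace, matched by UUID. A sketch of the two details involved, assuming uuid2nguid is the simple dash-stripper the tr call suggests:

    # uuid2nguid: an NGUID here is just the upper-cased UUID with dashes removed.
    uuid2nguid() { echo "${1^^}" | tr -d -; }
    uuid2nguid 62194444-e3ee-41c8-a5bd-11e17a2960bc   # -> 62194444E3EE41C8A5BD11E17A2960BC

    # One bdev_nvme controller per host NQN, then compare bdev UUIDs per path:
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 \
        -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
    rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'
    # host1's bdev must report 62194444-...; host2's (nvme1n2) must report de85b736-...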
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:07.651 01:16:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:11:07.651 01:16:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:07.651 01:16:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:11:07.651 01:16:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:07.651 01:16:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:07.651 rmmod nvme_tcp 00:11:07.651 rmmod nvme_fabrics 00:11:07.651 rmmod nvme_keyring 00:11:07.651 01:16:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:07.651 01:16:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:11:07.651 01:16:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:11:07.651 01:16:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 3300372 ']' 00:11:07.651 01:16:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 3300372 00:11:07.651 01:16:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 3300372 ']' 00:11:07.651 01:16:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 3300372 00:11:07.651 01:16:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:11:07.651 01:16:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:07.651 01:16:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3300372 00:11:07.651 01:16:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:07.651 01:16:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:07.651 01:16:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3300372' 00:11:07.651 killing process with pid 3300372 00:11:07.651 01:16:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 3300372 00:11:07.651 01:16:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 3300372 00:11:07.910 01:16:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:07.910 01:16:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:07.910 01:16:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:07.910 01:16:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:07.910 01:16:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:07.910 01:16:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.910 01:16:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:07.910 01:16:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.445 01:16:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:10.445 00:11:10.446 real 0m22.860s 00:11:10.446 user 0m24.745s 00:11:10.446 sys 0m5.949s 00:11:10.446 01:16:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:10.446 01:16:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:10.446 ************************************ 00:11:10.446 END TEST nvmf_ns_masking 00:11:10.446 ************************************ 00:11:10.446 01:16:35 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:11:10.446 01:16:35 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:11:10.446 01:16:35 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:10.446 01:16:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:10.446 01:16:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:10.446 01:16:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:10.446 ************************************ 00:11:10.446 START TEST nvmf_nvme_cli 00:11:10.446 ************************************ 00:11:10.446 01:16:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:10.446 * Looking for test storage... 00:11:10.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:11:10.446 01:16:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:15.723 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:15.723 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:15.723 Found net devices under 0000:86:00.0: cvl_0_0 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:15.723 Found net devices under 0000:86:00.1: cvl_0_1 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:15.723 01:16:41 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:15.723 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:15.723 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:11:15.723 00:11:15.723 --- 10.0.0.2 ping statistics --- 00:11:15.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.723 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:15.723 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:15.723 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:11:15.723 00:11:15.723 --- 10.0.0.1 ping statistics --- 00:11:15.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.723 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:15.723 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:15.724 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:15.724 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:15.724 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:15.724 01:16:41 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:11:15.724 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:15.724 01:16:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:15.724 01:16:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:15.724 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=3306407 00:11:15.724 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 3306407 00:11:15.724 01:16:41 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:15.724 01:16:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 3306407 ']' 00:11:15.724 01:16:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.724 01:16:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:15.724 01:16:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.724 01:16:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:15.724 01:16:41 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:15.724 [2024-07-16 01:16:41.534059] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
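The nvmftestinit trace above is the phy-mode network bring-up: one port of the E810 pair (cvl_0_0) is moved into a private network namespace to serve as the target side, while its peer cvl_0_1 stays in the root namespace as the initiator, so NVMe/TCP traffic genuinely crosses the physical link. Condensed from the nvmf/common.sh xtrace (the cvl_0_* names are this rig's ice ports), the setup amounts to:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port on the initiator-side interface
    ping -c 1 10.0.0.2                                                 # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # namespace -> root ns

Both one-packet pings returning in under 0.2 ms gates the rest of the test; nvmf_tgt itself is then launched under ip netns exec cvl_0_0_ns_spdk (the NVMF_TARGET_NS_CMD prefix), which is why its listener at 10.0.0.2:4420 is only reachable across the link.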
00:11:15.724 [2024-07-16 01:16:41.534103] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:15.724 EAL: No free 2048 kB hugepages reported on node 1 00:11:15.724 [2024-07-16 01:16:41.594895] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:15.724 [2024-07-16 01:16:41.675910] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:15.724 [2024-07-16 01:16:41.675946] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:15.724 [2024-07-16 01:16:41.675953] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:15.724 [2024-07-16 01:16:41.675960] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:15.724 [2024-07-16 01:16:41.675964] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:15.724 [2024-07-16 01:16:41.676007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.724 [2024-07-16 01:16:41.676101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:15.724 [2024-07-16 01:16:41.676121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:15.724 [2024-07-16 01:16:41.676122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.660 01:16:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:16.660 01:16:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:11:16.660 01:16:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:16.660 01:16:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:16.660 01:16:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:16.660 01:16:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:16.660 01:16:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:16.660 01:16:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.660 01:16:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:16.660 [2024-07-16 01:16:42.389330] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:16.660 01:16:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.660 01:16:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:16.660 01:16:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.660 01:16:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:16.660 Malloc0 00:11:16.660 01:16:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.660 01:16:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:16.660 01:16:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.660 01:16:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:16.660 Malloc1 00:11:16.660 01:16:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.660 01:16:42 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:11:16.660 01:16:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.660 01:16:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:16.661 01:16:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.661 01:16:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:16.661 01:16:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.661 01:16:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:16.661 01:16:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.661 01:16:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:16.661 01:16:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.661 01:16:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:16.661 01:16:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.661 01:16:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:16.661 01:16:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.661 01:16:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:16.661 [2024-07-16 01:16:42.470039] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:16.661 01:16:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.661 01:16:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:16.661 01:16:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.661 01:16:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:16.661 01:16:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.661 01:16:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:11:16.661 00:11:16.661 Discovery Log Number of Records 2, Generation counter 2 00:11:16.661 =====Discovery Log Entry 0====== 00:11:16.661 trtype: tcp 00:11:16.661 adrfam: ipv4 00:11:16.661 subtype: current discovery subsystem 00:11:16.661 treq: not required 00:11:16.661 portid: 0 00:11:16.661 trsvcid: 4420 00:11:16.661 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:16.661 traddr: 10.0.0.2 00:11:16.661 eflags: explicit discovery connections, duplicate discovery information 00:11:16.661 sectype: none 00:11:16.661 =====Discovery Log Entry 1====== 00:11:16.661 trtype: tcp 00:11:16.661 adrfam: ipv4 00:11:16.661 subtype: nvme subsystem 00:11:16.661 treq: not required 00:11:16.661 portid: 0 00:11:16.661 trsvcid: 4420 00:11:16.661 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:16.661 traddr: 10.0.0.2 00:11:16.661 eflags: none 00:11:16.661 sectype: none 00:11:16.661 01:16:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:11:16.661 01:16:42 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:11:16.661 01:16:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:16.661 01:16:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:16.661 01:16:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:16.661 01:16:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:16.661 01:16:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:16.661 01:16:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:16.661 01:16:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:16.661 01:16:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:11:16.661 01:16:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:18.037 01:16:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:18.037 01:16:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:11:18.037 01:16:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:18.037 01:16:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:11:18.037 01:16:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:11:18.037 01:16:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:11:19.936 01:16:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:19.936 01:16:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:19.936 01:16:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:19.936 01:16:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:11:19.936 01:16:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:19.936 01:16:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:11:19.936 01:16:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:11:19.936 01:16:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:19.936 01:16:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:19.937 01:16:45 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:11:19.937 /dev/nvme0n1 ]] 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:19.937 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:19.937 01:16:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:19.937 rmmod nvme_tcp 00:11:19.937 rmmod nvme_fabrics 00:11:20.194 rmmod nvme_keyring 00:11:20.195 01:16:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:20.195 01:16:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:11:20.195 01:16:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:11:20.195 01:16:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 3306407 ']' 00:11:20.195 01:16:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 3306407 00:11:20.195 01:16:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 3306407 ']' 00:11:20.195 01:16:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 3306407 00:11:20.195 01:16:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:11:20.195 01:16:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:20.195 01:16:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3306407 00:11:20.195 01:16:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:20.195 01:16:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:20.195 01:16:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3306407' 00:11:20.195 killing process with pid 3306407 00:11:20.195 01:16:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 3306407 00:11:20.195 01:16:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 3306407 00:11:20.453 01:16:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:20.453 01:16:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:20.453 01:16:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:20.453 01:16:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:20.453 01:16:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:20.453 01:16:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.453 01:16:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:20.453 01:16:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.354 01:16:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:22.354 00:11:22.354 real 0m12.326s 00:11:22.354 user 0m19.653s 00:11:22.354 sys 0m4.638s 00:11:22.354 01:16:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:22.354 01:16:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:22.354 ************************************ 00:11:22.354 END TEST nvmf_nvme_cli 00:11:22.354 ************************************ 00:11:22.354 01:16:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:22.354 01:16:48 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:11:22.354 01:16:48 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:22.354 01:16:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:22.354 01:16:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:22.354 01:16:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:22.613 ************************************ 00:11:22.614 START TEST nvmf_vfio_user 00:11:22.614 ************************************ 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:22.614 * Looking for test storage... 00:11:22.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:11:22.614 
01:16:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3307698 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3307698' 00:11:22.614 Process pid: 3307698 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3307698 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 3307698 ']' 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:22.614 01:16:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:22.614 [2024-07-16 01:16:48.510068] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:11:22.614 [2024-07-16 01:16:48.510120] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:22.614 EAL: No free 2048 kB hugepages reported on node 1 00:11:22.614 [2024-07-16 01:16:48.565926] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:22.873 [2024-07-16 01:16:48.645746] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:22.873 [2024-07-16 01:16:48.645783] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:22.873 [2024-07-16 01:16:48.645790] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:22.873 [2024-07-16 01:16:48.645795] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:22.873 [2024-07-16 01:16:48.645801] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
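Here the same target is stood up for the vfio-user transport: a four-core nvmf_tgt is started and then provisioned entirely over rpc.py, one malloc-backed subsystem per emulated controller, with a VFIOUSER listener whose address is a filesystem directory instead of an IP:port. A sketch condensed from the rpc.py calls traced below (the loop mirrors target/nvmf_vfio_user.sh's seq 1 $NUM_DEVICES with NUM_DEVICES=2; paths are shortened to be relative to the spdk checkout):

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i                  # 64 MiB bdev, 512 B blocks
        scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
            -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0      # traddr is a directory
    done

Each listener directory then holds a cntrl socket that an initiator maps like a PCI function, as the later "Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully" line shows.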
00:11:22.873 [2024-07-16 01:16:48.645845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:22.873 [2024-07-16 01:16:48.645938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:22.873 [2024-07-16 01:16:48.646026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:22.873 [2024-07-16 01:16:48.646027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.440 01:16:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:23.440 01:16:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:11:23.440 01:16:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:11:24.376 01:16:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:11:24.634 01:16:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:11:24.634 01:16:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:11:24.634 01:16:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:24.634 01:16:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:11:24.634 01:16:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:24.901 Malloc1 00:11:24.901 01:16:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:11:24.901 01:16:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:11:25.159 01:16:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:11:25.416 01:16:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:25.416 01:16:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:11:25.416 01:16:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:25.674 Malloc2 00:11:25.674 01:16:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:11:25.674 01:16:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:11:25.931 01:16:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:11:26.190 01:16:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:11:26.190 01:16:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:11:26.190 01:16:51 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:26.190 01:16:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:26.190 01:16:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:11:26.190 01:16:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:26.190 [2024-07-16 01:16:52.000808] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:11:26.190 [2024-07-16 01:16:52.000840] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3308395 ] 00:11:26.190 EAL: No free 2048 kB hugepages reported on node 1 00:11:26.190 [2024-07-16 01:16:52.029565] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:11:26.190 [2024-07-16 01:16:52.036673] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:26.190 [2024-07-16 01:16:52.036692] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fe1c815b000 00:11:26.190 [2024-07-16 01:16:52.037673] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:26.190 [2024-07-16 01:16:52.038669] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:26.190 [2024-07-16 01:16:52.039670] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:26.190 [2024-07-16 01:16:52.040677] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:26.190 [2024-07-16 01:16:52.041679] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:26.190 [2024-07-16 01:16:52.042684] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:26.190 [2024-07-16 01:16:52.043687] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:26.190 [2024-07-16 01:16:52.044693] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:26.190 [2024-07-16 01:16:52.045703] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:26.190 [2024-07-16 01:16:52.045715] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fe1c8150000 00:11:26.190 [2024-07-16 01:16:52.046744] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:26.190 [2024-07-16 01:16:52.059410] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:11:26.190 [2024-07-16 01:16:52.059439] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:11:26.190 [2024-07-16 01:16:52.064810] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:26.190 [2024-07-16 01:16:52.064849] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:26.190 [2024-07-16 01:16:52.064926] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:11:26.190 [2024-07-16 01:16:52.064944] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:11:26.190 [2024-07-16 01:16:52.064952] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:11:26.191 [2024-07-16 01:16:52.065813] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:11:26.191 [2024-07-16 01:16:52.065822] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:11:26.191 [2024-07-16 01:16:52.065828] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:11:26.191 [2024-07-16 01:16:52.066817] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:26.191 [2024-07-16 01:16:52.066825] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:11:26.191 [2024-07-16 01:16:52.066831] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:11:26.191 [2024-07-16 01:16:52.067827] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:11:26.191 [2024-07-16 01:16:52.067836] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:26.191 [2024-07-16 01:16:52.068829] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:11:26.191 [2024-07-16 01:16:52.068837] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:11:26.191 [2024-07-16 01:16:52.068841] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:11:26.191 [2024-07-16 01:16:52.068847] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:26.191 [2024-07-16 01:16:52.068953] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:11:26.191 [2024-07-16 01:16:52.068957] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:26.191 [2024-07-16 01:16:52.068962] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:11:26.191 [2024-07-16 01:16:52.069831] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:11:26.191 [2024-07-16 01:16:52.070837] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:11:26.191 [2024-07-16 01:16:52.071845] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:26.191 [2024-07-16 01:16:52.072843] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:26.191 [2024-07-16 01:16:52.072907] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:26.191 [2024-07-16 01:16:52.073854] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:11:26.191 [2024-07-16 01:16:52.073861] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:26.191 [2024-07-16 01:16:52.073865] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:11:26.191 [2024-07-16 01:16:52.073882] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:11:26.191 [2024-07-16 01:16:52.073889] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:11:26.191 [2024-07-16 01:16:52.073903] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:26.191 [2024-07-16 01:16:52.073907] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:26.191 [2024-07-16 01:16:52.073921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:26.191 [2024-07-16 01:16:52.073961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:26.191 [2024-07-16 01:16:52.073970] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:11:26.191 [2024-07-16 01:16:52.073974] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:11:26.191 [2024-07-16 01:16:52.073978] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:11:26.191 [2024-07-16 01:16:52.073982] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:26.191 [2024-07-16 01:16:52.073987] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:11:26.191 [2024-07-16 01:16:52.073991] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:11:26.191 [2024-07-16 01:16:52.073995] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:11:26.191 [2024-07-16 01:16:52.074002] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:11:26.191 [2024-07-16 01:16:52.074014] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:11:26.191 [2024-07-16 01:16:52.074025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:26.191 [2024-07-16 01:16:52.074035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.191 [2024-07-16 01:16:52.074042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.191 [2024-07-16 01:16:52.074051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.191 [2024-07-16 01:16:52.074058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.191 [2024-07-16 01:16:52.074062] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:11:26.191 [2024-07-16 01:16:52.074070] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:26.191 [2024-07-16 01:16:52.074078] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:26.191 [2024-07-16 01:16:52.074086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:26.191 [2024-07-16 01:16:52.074091] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:11:26.191 [2024-07-16 01:16:52.074096] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:26.191 [2024-07-16 01:16:52.074104] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:11:26.191 [2024-07-16 01:16:52.074109] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:11:26.191 [2024-07-16 01:16:52.074117] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:26.191 [2024-07-16 01:16:52.074130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:26.191 [2024-07-16 01:16:52.074180] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:11:26.191 [2024-07-16 01:16:52.074187] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:11:26.191 [2024-07-16 01:16:52.074194] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:26.191 [2024-07-16 01:16:52.074198] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:26.191 [2024-07-16 01:16:52.074203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:11:26.191 [2024-07-16 01:16:52.074216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:26.191 [2024-07-16 01:16:52.074225] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:11:26.191 [2024-07-16 01:16:52.074237] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:11:26.191 [2024-07-16 01:16:52.074244] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:11:26.191 [2024-07-16 01:16:52.074250] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:26.191 [2024-07-16 01:16:52.074254] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:26.191 [2024-07-16 01:16:52.074259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:26.191 [2024-07-16 01:16:52.074273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:26.191 [2024-07-16 01:16:52.074284] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:11:26.191 [2024-07-16 01:16:52.074293] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:11:26.191 [2024-07-16 01:16:52.074299] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:26.191 [2024-07-16 01:16:52.074303] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:26.191 [2024-07-16 01:16:52.074308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:26.191 [2024-07-16 01:16:52.074318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:26.191 [2024-07-16 01:16:52.074325] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:26.191 [2024-07-16 01:16:52.074331] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
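The DEBUG lines above are SPDK's generic controller bring-up state machine running over vfio-user: the offsets in the nvme_vfio_ctrlr_get/set_reg lines are the standard NVMe register map (0x0 CAP, 0x8 VS, 0x14 CC, 0x1c CSTS, 0x24 AQA, 0x28 ASQ, 0x30 ACQ), so the flow is the usual one (read CAP/VS, see CC.EN=0 with CSTS.RDY=0, program the admin queue, write CC.EN=1, poll for CSTS.RDY=1, then IDENTIFY and feature setup), with every register access carried as a message over the UNIX socket rather than a BAR0 MMIO. The same bring-up can be replayed by hand against a live listener with the tool the test invokes (path again shortened to the spdk checkout root):

    build/bin/spdk_nvme_identify \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
        -g -L nvme -L nvme_vfio -L vfio_pci

-g corresponds to DPDK's --single-file-segments (visible in the EAL parameters line above), and the three -L flags enable exactly the nvme, nvme_vfio and vfio_pci debug components being printed here.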
00:11:26.191 [2024-07-16 01:16:52.074341] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:11:26.191 [2024-07-16 01:16:52.074347] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:11:26.191 [2024-07-16 01:16:52.074351] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:26.191 [2024-07-16 01:16:52.074356] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:11:26.191 [2024-07-16 01:16:52.074360] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:11:26.191 [2024-07-16 01:16:52.074364] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:11:26.191 [2024-07-16 01:16:52.074369] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:11:26.191 [2024-07-16 01:16:52.074386] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:26.191 [2024-07-16 01:16:52.074395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:26.191 [2024-07-16 01:16:52.074405] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:26.191 [2024-07-16 01:16:52.074414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:26.191 [2024-07-16 01:16:52.074423] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:26.192 [2024-07-16 01:16:52.074433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:26.192 [2024-07-16 01:16:52.074442] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:26.192 [2024-07-16 01:16:52.074452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:26.192 [2024-07-16 01:16:52.074464] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:26.192 [2024-07-16 01:16:52.074468] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:26.192 [2024-07-16 01:16:52.074471] nvme_pcie_common.c:1240:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:11:26.192 [2024-07-16 01:16:52.074475] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:26.192 [2024-07-16 01:16:52.074481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:26.192 [2024-07-16 01:16:52.074487] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:26.192 
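The nvme_pcie_prp_list_append records above and below show how each command's data buffer is described to the controller. The NVMe PRP rule: a buffer within one 4 KiB page needs only PRP1 (PRP2 stays 0, as in the len:512 and len:4096 commands); a buffer spanning exactly two pages puts the second page address straight into PRP2 (the len:8192 GET LOG PAGE above: PRP1 0x2000002f6000, PRP2 0x2000002f7000); anything longer turns PRP2 into a pointer to a list of page entries. A minimal sketch of that decision, with hypothetical names (classify_prp is not an SPDK function):

#include <stdint.h>

enum prp_form { PRP1_ONLY, PRP2_IS_SECOND_PAGE, PRP2_IS_LIST_POINTER };

/* Classify a host buffer the way the PRP records above do,
 * assuming a 4096-byte memory page size. */
static enum prp_form
classify_prp(uint64_t virt_addr, uint32_t len, uint32_t page_size)
{
	uint64_t offset  = virt_addr & (page_size - 1); /* offset into first page */
	uint64_t n_pages = (offset + len + page_size - 1) / page_size;

	if (n_pages == 1)
		return PRP1_ONLY;            /* e.g. len:512, aligned len:4096 */
	if (n_pages == 2)
		return PRP2_IS_SECOND_PAGE;  /* e.g. len:8192 -> prp2 = 0x...f7000 */
	return PRP2_IS_LIST_POINTER;
}

(offset + len + page_size - 1) / page_size is simply the page count rounded up; for the 8192-byte log buffer above it is 2, hence the direct PRP2. The PRP construction for the remaining log pages continues in the records below.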
[2024-07-16 01:16:52.074491] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:26.192 [2024-07-16 01:16:52.074496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:26.192 [2024-07-16 01:16:52.074502] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:26.192 [2024-07-16 01:16:52.074506] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:26.192 [2024-07-16 01:16:52.074511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:26.192 [2024-07-16 01:16:52.074518] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:26.192 [2024-07-16 01:16:52.074521] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:26.192 [2024-07-16 01:16:52.074527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:26.192 [2024-07-16 01:16:52.074532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:26.192 [2024-07-16 01:16:52.074543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:26.192 [2024-07-16 01:16:52.074554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:26.192 [2024-07-16 01:16:52.074561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:26.192 ===================================================== 00:11:26.192 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:26.192 ===================================================== 00:11:26.192 Controller Capabilities/Features 00:11:26.192 ================================ 00:11:26.192 Vendor ID: 4e58 00:11:26.192 Subsystem Vendor ID: 4e58 00:11:26.192 Serial Number: SPDK1 00:11:26.192 Model Number: SPDK bdev Controller 00:11:26.192 Firmware Version: 24.09 00:11:26.192 Recommended Arb Burst: 6 00:11:26.192 IEEE OUI Identifier: 8d 6b 50 00:11:26.192 Multi-path I/O 00:11:26.192 May have multiple subsystem ports: Yes 00:11:26.192 May have multiple controllers: Yes 00:11:26.192 Associated with SR-IOV VF: No 00:11:26.192 Max Data Transfer Size: 131072 00:11:26.192 Max Number of Namespaces: 32 00:11:26.192 Max Number of I/O Queues: 127 00:11:26.192 NVMe Specification Version (VS): 1.3 00:11:26.192 NVMe Specification Version (Identify): 1.3 00:11:26.192 Maximum Queue Entries: 256 00:11:26.192 Contiguous Queues Required: Yes 00:11:26.192 Arbitration Mechanisms Supported 00:11:26.192 Weighted Round Robin: Not Supported 00:11:26.192 Vendor Specific: Not Supported 00:11:26.192 Reset Timeout: 15000 ms 00:11:26.192 Doorbell Stride: 4 bytes 00:11:26.192 NVM Subsystem Reset: Not Supported 00:11:26.192 Command Sets Supported 00:11:26.192 NVM Command Set: Supported 00:11:26.192 Boot Partition: Not Supported 00:11:26.192 Memory Page Size Minimum: 4096 bytes 00:11:26.192 Memory Page Size Maximum: 4096 bytes 00:11:26.192 Persistent Memory Region: Not Supported 
00:11:26.192 Optional Asynchronous Events Supported
00:11:26.192 Namespace Attribute Notices: Supported
00:11:26.192 Firmware Activation Notices: Not Supported
00:11:26.192 ANA Change Notices: Not Supported
00:11:26.192 PLE Aggregate Log Change Notices: Not Supported
00:11:26.192 LBA Status Info Alert Notices: Not Supported
00:11:26.192 EGE Aggregate Log Change Notices: Not Supported
00:11:26.192 Normal NVM Subsystem Shutdown event: Not Supported
00:11:26.192 Zone Descriptor Change Notices: Not Supported
00:11:26.192 Discovery Log Change Notices: Not Supported
00:11:26.192 Controller Attributes
00:11:26.192 128-bit Host Identifier: Supported
00:11:26.192 Non-Operational Permissive Mode: Not Supported
00:11:26.192 NVM Sets: Not Supported
00:11:26.192 Read Recovery Levels: Not Supported
00:11:26.192 Endurance Groups: Not Supported
00:11:26.192 Predictable Latency Mode: Not Supported
00:11:26.192 Traffic Based Keep Alive: Not Supported
00:11:26.192 Namespace Granularity: Not Supported
00:11:26.192 SQ Associations: Not Supported
00:11:26.192 UUID List: Not Supported
00:11:26.192 Multi-Domain Subsystem: Not Supported
00:11:26.192 Fixed Capacity Management: Not Supported
00:11:26.192 Variable Capacity Management: Not Supported
00:11:26.192 Delete Endurance Group: Not Supported
00:11:26.192 Delete NVM Set: Not Supported
00:11:26.192 Extended LBA Formats Supported: Not Supported
00:11:26.192 Flexible Data Placement Supported: Not Supported
00:11:26.192 
00:11:26.192 Controller Memory Buffer Support
00:11:26.192 ================================
00:11:26.192 Supported: No
00:11:26.192 
00:11:26.192 Persistent Memory Region Support
00:11:26.192 ================================
00:11:26.192 Supported: No
00:11:26.192 
00:11:26.192 Admin Command Set Attributes
00:11:26.192 ============================
00:11:26.192 Security Send/Receive: Not Supported
00:11:26.192 Format NVM: Not Supported
00:11:26.192 Firmware Activate/Download: Not Supported
00:11:26.192 Namespace Management: Not Supported
00:11:26.192 Device Self-Test: Not Supported
00:11:26.192 Directives: Not Supported
00:11:26.192 NVMe-MI: Not Supported
00:11:26.192 Virtualization Management: Not Supported
00:11:26.192 Doorbell Buffer Config: Not Supported
00:11:26.192 Get LBA Status Capability: Not Supported
00:11:26.192 Command & Feature Lockdown Capability: Not Supported
00:11:26.192 Abort Command Limit: 4
00:11:26.192 Async Event Request Limit: 4
00:11:26.192 Number of Firmware Slots: N/A
00:11:26.192 Firmware Slot 1 Read-Only: N/A
00:11:26.192 Firmware Activation Without Reset: N/A
00:11:26.192 Multiple Update Detection Support: N/A
00:11:26.192 Firmware Update Granularity: No Information Provided
00:11:26.192 Per-Namespace SMART Log: No
00:11:26.192 Asymmetric Namespace Access Log Page: Not Supported
00:11:26.192 Subsystem NQN: nqn.2019-07.io.spdk:cnode1
00:11:26.192 Command Effects Log Page: Supported
00:11:26.192 Get Log Page Extended Data: Supported
00:11:26.192 Telemetry Log Pages: Not Supported
00:11:26.192 Persistent Event Log Pages: Not Supported
00:11:26.192 Supported Log Pages Log Page: May Support
00:11:26.192 Commands Supported & Effects Log Page: Not Supported
00:11:26.192 Feature Identifiers & Effects Log Page: May Support
00:11:26.192 NVMe-MI Commands & Effects Log Page: May Support
00:11:26.192 Data Area 4 for Telemetry Log: Not Supported
00:11:26.192 Error Log Page Entries Supported: 128
00:11:26.192 Keep Alive: Supported
00:11:26.192 Keep Alive Granularity: 10000 ms
00:11:26.192 
00:11:26.192 NVM Command Set Attributes
00:11:26.192 ==========================
00:11:26.192 Submission Queue Entry Size
00:11:26.192 Max: 64
00:11:26.192 Min: 64
00:11:26.192 Completion Queue Entry Size
00:11:26.192 Max: 16
00:11:26.192 Min: 16
00:11:26.192 Number of Namespaces: 32
00:11:26.192 Compare Command: Supported
00:11:26.192 Write Uncorrectable Command: Not Supported
00:11:26.192 Dataset Management Command: Supported
00:11:26.192 Write Zeroes Command: Supported
00:11:26.192 Set Features Save Field: Not Supported
00:11:26.192 Reservations: Not Supported
00:11:26.192 Timestamp: Not Supported
00:11:26.192 Copy: Supported
00:11:26.192 Volatile Write Cache: Present
00:11:26.192 Atomic Write Unit (Normal): 1
00:11:26.192 Atomic Write Unit (PFail): 1
00:11:26.192 Atomic Compare & Write Unit: 1
00:11:26.192 Fused Compare & Write: Supported
00:11:26.192 Scatter-Gather List
00:11:26.192 SGL Command Set: Supported (Dword aligned)
00:11:26.192 SGL Keyed: Not Supported
00:11:26.192 SGL Bit Bucket Descriptor: Not Supported
00:11:26.192 SGL Metadata Pointer: Not Supported
00:11:26.192 Oversized SGL: Not Supported
00:11:26.192 SGL Metadata Address: Not Supported
00:11:26.192 SGL Offset: Not Supported
00:11:26.192 Transport SGL Data Block: Not Supported
00:11:26.192 Replay Protected Memory Block: Not Supported
00:11:26.192 
00:11:26.192 Firmware Slot Information
00:11:26.192 =========================
00:11:26.192 Active slot: 1
00:11:26.192 Slot 1 Firmware Revision: 24.09
00:11:26.192 
00:11:26.192 
00:11:26.192 Commands Supported and Effects
00:11:26.192 ==============================
00:11:26.192 Admin Commands
00:11:26.192 --------------
00:11:26.192 Get Log Page (02h): Supported 
00:11:26.192 Identify (06h): Supported 
00:11:26.192 Abort (08h): Supported 
00:11:26.192 Set Features (09h): Supported 
00:11:26.192 Get Features (0Ah): Supported 
00:11:26.192 Asynchronous Event Request (0Ch): Supported 
00:11:26.192 Keep Alive (18h): Supported 
00:11:26.192 I/O Commands
00:11:26.192 ------------
00:11:26.192 Flush (00h): Supported LBA-Change 
00:11:26.192 Write (01h): Supported LBA-Change 
00:11:26.192 Read (02h): Supported 
00:11:26.193 Compare (05h): Supported 
00:11:26.193 Write Zeroes (08h): Supported LBA-Change 
00:11:26.193 Dataset Management (09h): Supported LBA-Change 
00:11:26.193 Copy (19h): Supported LBA-Change 
00:11:26.193 
00:11:26.193 Error Log
00:11:26.193 =========
00:11:26.193 
00:11:26.193 Arbitration
00:11:26.193 ===========
00:11:26.193 Arbitration Burst: 1
00:11:26.193 
00:11:26.193 Power Management
00:11:26.193 ================
00:11:26.193 Number of Power States: 1
00:11:26.193 Current Power State: Power State #0
00:11:26.193 Power State #0:
00:11:26.193 Max Power: 0.00 W
00:11:26.193 Non-Operational State: Operational
00:11:26.193 Entry Latency: Not Reported
00:11:26.193 Exit Latency: Not Reported
00:11:26.193 Relative Read Throughput: 0
00:11:26.193 Relative Read Latency: 0
00:11:26.193 Relative Write Throughput: 0
00:11:26.193 Relative Write Latency: 0
00:11:26.193 Idle Power: Not Reported
00:11:26.193 Active Power: Not Reported
00:11:26.193 Non-Operational Permissive Mode: Not Supported
00:11:26.193 
00:11:26.193 Health Information
00:11:26.193 ==================
00:11:26.193 Critical Warnings:
00:11:26.193 Available Spare Space: OK
00:11:26.193 Temperature: OK
00:11:26.193 Device Reliability: OK
00:11:26.193 Read Only: No
00:11:26.193 Volatile Memory Backup: OK
00:11:26.193 Current Temperature: 0 Kelvin (-273 Celsius)
00:11:26.193 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:11:26.193 Available Spare: 0%
00:11:26.193 Available Spare Threshold: 0%
00:11:26.193 Life Percentage Used: 0%
00:11:26.193 Data Units Read: 0
00:11:26.193 Data Units Written: 0
00:11:26.193 Host Read Commands: 0
00:11:26.193 Host Write Commands: 0
00:11:26.193 Controller Busy Time: 0 minutes
00:11:26.193 Power Cycles: 0
00:11:26.193 Power On Hours: 0 hours
00:11:26.193 Unsafe Shutdowns: 0
00:11:26.193 Unrecoverable Media Errors: 0
00:11:26.193 Lifetime Error Log Entries: 0
00:11:26.193 Warning Temperature Time: 0 minutes
00:11:26.193 Critical Temperature Time: 0 minutes
00:11:26.193 
00:11:26.193 Number of Queues
00:11:26.193 ================
00:11:26.193 Number of I/O Submission Queues: 127
00:11:26.193 Number of I/O Completion Queues: 127
00:11:26.193 
00:11:26.193 Active Namespaces
00:11:26.193 =================
00:11:26.193 Namespace ID:1
00:11:26.193 Error Recovery Timeout: Unlimited
00:11:26.193 Command Set Identifier: NVM (00h)
00:11:26.193 Deallocate: Supported
00:11:26.193 Deallocated/Unwritten Error: Not Supported
00:11:26.193 Deallocated Read Value: Unknown
00:11:26.193 Deallocate in Write Zeroes: Not Supported
00:11:26.193 Deallocated Guard Field: 0xFFFF
00:11:26.193 Flush: Supported
00:11:26.193 Reservation: Supported
00:11:26.193 Namespace Sharing Capabilities: Multiple Controllers
00:11:26.193 Size (in LBAs): 131072 (0GiB)
00:11:26.193 Capacity (in LBAs): 131072 (0GiB)
00:11:26.193 Utilization (in LBAs): 131072 (0GiB)
00:11:26.193 NGUID: E4C6C5F84D4C46AC8A2FBDE432FF3CDF
00:11:26.193 UUID: e4c6c5f8-4d4c-46ac-8a2f-bde432ff3cdf
00:11:26.193 Thin Provisioning: Not Supported
00:11:26.193 Per-NS Atomic Units: Yes
00:11:26.193 Atomic Boundary Size (Normal): 0
00:11:26.193 Atomic Boundary Size (PFail): 0
00:11:26.193 Atomic Boundary Offset: 0
00:11:26.193 Maximum Single Source Range Length: 65535
00:11:26.193 Maximum Copy Length: 65535
00:11:26.193 Maximum Source Range Count: 1
00:11:26.193 NGUID/EUI64 Never Reused: No
00:11:26.193 Namespace Write Protected: No
00:11:26.193 Number of LBA Formats: 1
00:11:26.193 Current LBA Format: LBA Format #00
00:11:26.193 LBA Format #00: Data Size: 512 Metadata Size: 0
00:11:26.193 
00:11:26.193 [2024-07-16 01:16:52.074644] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0
00:11:26.193 [2024-07-16 01:16:52.074651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0
00:11:26.193 [2024-07-16 01:16:52.074673] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD
00:11:26.193 [2024-07-16 01:16:52.074682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:26.193 [2024-07-16 01:16:52.074688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:26.193 [2024-07-16 01:16:52.074693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:26.193 [2024-07-16 01:16:52.074698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:26.193 [2024-07-16 01:16:52.078343] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001
00:11:26.193 [2024-07-16 01:16:52.078354] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001
00:11:26.193 [2024-07-16 01:16:52.078887] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:11:26.193 [2024-07-16 01:16:52.078930] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us
00:11:26.193 [2024-07-16 01:16:52.078937] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms
00:11:26.193 [2024-07-16 01:16:52.079894] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9
00:11:26.193 [2024-07-16 01:16:52.079905] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds
00:11:26.193 [2024-07-16 01:16:52.079954] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl
00:11:26.193 [2024-07-16 01:16:52.081921] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:11:26.193 01:16:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
00:11:26.193 EAL: No free 2048 kB hugepages reported on node 1
00:11:26.451 [2024-07-16 01:16:52.300121] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:11:31.717 Initializing NVMe Controllers
00:11:31.717 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:11:31.717 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1
00:11:31.717 Initialization complete. Launching workers.
00:11:31.717 ========================================================
00:11:31.717 Latency(us)
00:11:31.717 Device Information : IOPS MiB/s Average min max
00:11:31.717 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39925.77 155.96 3205.55 941.41 9413.52
00:11:31.717 ========================================================
00:11:31.717 Total : 39925.77 155.96 3205.55 941.41 9413.52
00:11:31.717 
00:11:31.717 [2024-07-16 01:16:57.320693] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:11:31.717 01:16:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
00:11:31.717 EAL: No free 2048 kB hugepages reported on node 1
00:11:31.717 [2024-07-16 01:16:57.547723] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:11:36.983 Initializing NVMe Controllers
00:11:36.983 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:11:36.983 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1
00:11:36.983 Initialization complete. Launching workers.
00:11:36.983 ======================================================== 00:11:36.983 Latency(us) 00:11:36.983 Device Information : IOPS MiB/s Average min max 00:11:36.983 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7984.57 4984.69 12006.78 00:11:36.983 ======================================================== 00:11:36.983 Total : 16051.20 62.70 7984.57 4984.69 12006.78 00:11:36.983 00:11:36.983 [2024-07-16 01:17:02.588742] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:36.983 01:17:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:11:36.983 EAL: No free 2048 kB hugepages reported on node 1 00:11:36.983 [2024-07-16 01:17:02.786707] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:42.297 [2024-07-16 01:17:07.863640] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:42.297 Initializing NVMe Controllers 00:11:42.297 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:42.297 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:42.297 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:11:42.297 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:11:42.297 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:11:42.297 Initialization complete. Launching workers. 00:11:42.297 Starting thread on core 2 00:11:42.297 Starting thread on core 3 00:11:42.297 Starting thread on core 1 00:11:42.297 01:17:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:11:42.297 EAL: No free 2048 kB hugepages reported on node 1 00:11:42.297 [2024-07-16 01:17:08.147840] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:45.584 [2024-07-16 01:17:11.211669] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:45.584 Initializing NVMe Controllers 00:11:45.584 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:45.584 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:45.584 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:11:45.584 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:11:45.584 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:11:45.584 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:11:45.584 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:11:45.584 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:11:45.584 Initialization complete. Launching workers. 
00:11:45.584 Starting thread on core 1 with urgent priority queue 00:11:45.584 Starting thread on core 2 with urgent priority queue 00:11:45.584 Starting thread on core 3 with urgent priority queue 00:11:45.585 Starting thread on core 0 with urgent priority queue 00:11:45.585 SPDK bdev Controller (SPDK1 ) core 0: 5970.33 IO/s 16.75 secs/100000 ios 00:11:45.585 SPDK bdev Controller (SPDK1 ) core 1: 5692.00 IO/s 17.57 secs/100000 ios 00:11:45.585 SPDK bdev Controller (SPDK1 ) core 2: 5943.00 IO/s 16.83 secs/100000 ios 00:11:45.585 SPDK bdev Controller (SPDK1 ) core 3: 5033.67 IO/s 19.87 secs/100000 ios 00:11:45.585 ======================================================== 00:11:45.585 00:11:45.585 01:17:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:45.585 EAL: No free 2048 kB hugepages reported on node 1 00:11:45.585 [2024-07-16 01:17:11.480843] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:45.585 Initializing NVMe Controllers 00:11:45.585 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:45.585 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:45.585 Namespace ID: 1 size: 0GB 00:11:45.585 Initialization complete. 00:11:45.585 INFO: using host memory buffer for IO 00:11:45.585 Hello world! 00:11:45.585 [2024-07-16 01:17:11.514044] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:45.585 01:17:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:45.843 EAL: No free 2048 kB hugepages reported on node 1 00:11:45.843 [2024-07-16 01:17:11.784010] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:47.220 Initializing NVMe Controllers 00:11:47.220 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:47.220 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:47.220 Initialization complete. Launching workers. 
00:11:47.220 submit (in ns) avg, min, max = 7033.6, 3212.4, 3999355.2 00:11:47.220 complete (in ns) avg, min, max = 18940.3, 1710.5, 5991369.5 00:11:47.220 00:11:47.220 Submit histogram 00:11:47.220 ================ 00:11:47.220 Range in us Cumulative Count 00:11:47.220 3.200 - 3.215: 0.0060% ( 1) 00:11:47.220 3.230 - 3.246: 0.0958% ( 15) 00:11:47.220 3.246 - 3.261: 0.2994% ( 34) 00:11:47.220 3.261 - 3.276: 0.7485% ( 75) 00:11:47.220 3.276 - 3.291: 2.0837% ( 223) 00:11:47.220 3.291 - 3.307: 6.2032% ( 688) 00:11:47.220 3.307 - 3.322: 11.7538% ( 927) 00:11:47.220 3.322 - 3.337: 17.6936% ( 992) 00:11:47.220 3.337 - 3.352: 24.4177% ( 1123) 00:11:47.220 3.352 - 3.368: 30.5191% ( 1019) 00:11:47.220 3.368 - 3.383: 36.4769% ( 995) 00:11:47.220 3.383 - 3.398: 42.4525% ( 998) 00:11:47.220 3.398 - 3.413: 48.3983% ( 993) 00:11:47.220 3.413 - 3.429: 53.9070% ( 920) 00:11:47.220 3.429 - 3.444: 59.7629% ( 978) 00:11:47.220 3.444 - 3.459: 67.5588% ( 1302) 00:11:47.220 3.459 - 3.474: 72.8040% ( 876) 00:11:47.220 3.474 - 3.490: 78.1390% ( 891) 00:11:47.220 3.490 - 3.505: 82.0370% ( 651) 00:11:47.220 3.505 - 3.520: 84.7075% ( 446) 00:11:47.220 3.520 - 3.535: 86.4559% ( 292) 00:11:47.220 3.535 - 3.550: 87.2044% ( 125) 00:11:47.220 3.550 - 3.566: 87.8391% ( 106) 00:11:47.220 3.566 - 3.581: 88.2642% ( 71) 00:11:47.220 3.581 - 3.596: 88.8150% ( 92) 00:11:47.220 3.596 - 3.611: 89.3899% ( 96) 00:11:47.220 3.611 - 3.627: 90.3179% ( 155) 00:11:47.220 3.627 - 3.642: 91.1981% ( 147) 00:11:47.220 3.642 - 3.657: 92.0903% ( 149) 00:11:47.220 3.657 - 3.672: 93.0723% ( 164) 00:11:47.220 3.672 - 3.688: 93.9824% ( 152) 00:11:47.220 3.688 - 3.703: 95.0302% ( 175) 00:11:47.220 3.703 - 3.718: 96.0541% ( 171) 00:11:47.220 3.718 - 3.733: 96.8864% ( 139) 00:11:47.220 3.733 - 3.749: 97.5391% ( 109) 00:11:47.220 3.749 - 3.764: 98.1977% ( 110) 00:11:47.220 3.764 - 3.779: 98.6588% ( 77) 00:11:47.220 3.779 - 3.794: 99.0001% ( 57) 00:11:47.220 3.794 - 3.810: 99.2096% ( 35) 00:11:47.220 3.810 - 3.825: 99.3653% ( 26) 00:11:47.220 3.825 - 3.840: 99.4851% ( 20) 00:11:47.220 3.840 - 3.855: 99.5449% ( 10) 00:11:47.220 3.855 - 3.870: 99.5689% ( 4) 00:11:47.220 3.870 - 3.886: 99.5869% ( 3) 00:11:47.220 3.886 - 3.901: 99.6048% ( 3) 00:11:47.220 3.901 - 3.931: 99.6288% ( 4) 00:11:47.220 3.962 - 3.992: 99.6348% ( 1) 00:11:47.220 5.059 - 5.090: 99.6407% ( 1) 00:11:47.220 5.090 - 5.120: 99.6527% ( 2) 00:11:47.220 5.150 - 5.181: 99.6587% ( 1) 00:11:47.220 5.242 - 5.272: 99.6647% ( 1) 00:11:47.220 5.272 - 5.303: 99.6707% ( 1) 00:11:47.220 5.425 - 5.455: 99.6827% ( 2) 00:11:47.220 5.577 - 5.608: 99.6886% ( 1) 00:11:47.220 5.608 - 5.638: 99.7006% ( 2) 00:11:47.220 5.669 - 5.699: 99.7126% ( 2) 00:11:47.220 5.730 - 5.760: 99.7186% ( 1) 00:11:47.220 5.790 - 5.821: 99.7306% ( 2) 00:11:47.220 5.912 - 5.943: 99.7365% ( 1) 00:11:47.220 6.034 - 6.065: 99.7425% ( 1) 00:11:47.220 6.065 - 6.095: 99.7485% ( 1) 00:11:47.220 6.217 - 6.248: 99.7545% ( 1) 00:11:47.220 6.248 - 6.278: 99.7665% ( 2) 00:11:47.220 6.309 - 6.339: 99.7785% ( 2) 00:11:47.220 6.339 - 6.370: 99.7844% ( 1) 00:11:47.220 6.491 - 6.522: 99.7904% ( 1) 00:11:47.220 6.766 - 6.796: 99.8024% ( 2) 00:11:47.220 6.827 - 6.857: 99.8084% ( 1) 00:11:47.220 6.979 - 7.010: 99.8323% ( 4) 00:11:47.220 7.375 - 7.406: 99.8443% ( 2) 00:11:47.220 7.497 - 7.528: 99.8503% ( 1) 00:11:47.220 7.558 - 7.589: 99.8683% ( 3) 00:11:47.220 7.589 - 7.619: 99.8743% ( 1) 00:11:47.220 7.924 - 7.985: 99.8802% ( 1) 00:11:47.220 8.107 - 8.168: 99.8862% ( 1) 00:11:47.220 9.204 - 9.265: 99.8922% ( 1) 00:11:47.220 11.398 - 11.459: 
99.8982% ( 1)
00:11:47.220 14.568 - 14.629: 99.9042% ( 1)
00:11:47.220 19.017 - 19.139: 99.9102% ( 1)
00:11:47.221 3994.575 - 4025.783: 100.0000% ( 15)
00:11:47.221 
00:11:47.221 Complete histogram
00:11:47.221 ==================
00:11:47.221 Range in us Cumulative Count
00:11:47.221 1.707 - 1.714: 0.0060% ( 1)
00:11:47.221 1.714 - 1.722: 0.0180% ( 2)
00:11:47.221 1.722 - 1.730: 0.0299% ( 2)
00:11:47.221 1.730 - 1.737: 0.0359% ( 1)
00:11:47.221 1.737 - 1.745: 0.0419% ( 1)
00:11:47.221 1.752 - 1.760: 0.2036% ( 27)
00:11:47.221 1.760 - 1.768: 4.4249% ( 705)
00:11:47.221 1.768 - 1.775: 20.3700% ( 2663)
00:11:47.221 1.775 - 1.783: 35.8182% ( 2580)
00:11:47.221 1.783 - 1.790: 40.5365% ( 788)
00:11:47.221 1.790 - 1.798: 42.0095% ( 246)
00:11:47.221 1.798 - 1.806: 43.1411% ( 189)
00:11:47.221 1.806 - 1.813: 43.8537% ( 119)
00:11:47.221 1.813 - 1.821: 44.5542% ( 117)
00:11:47.221 1.821 - 1.829: 50.4581% ( 986)
00:11:47.221 1.829 - 1.836: 68.0139% ( 2932)
00:11:47.221 1.836 - 1.844: 85.3721% ( 2899)
00:11:47.221 1.844 - 1.851: 92.2939% ( 1156)
00:11:47.221 1.851 - 1.859: 94.7907% ( 417)
00:11:47.221 1.859 - 1.867: 96.3116% ( 254)
00:11:47.221 1.867 - 1.874: 97.0720% ( 127)
00:11:47.221 1.874 - 1.882: 97.4493% ( 63)
00:11:47.221 1.882 - 1.890: 97.6708% ( 37)
00:11:47.221 1.890 - 1.897: 97.9462% ( 46)
00:11:47.221 1.897 - 1.905: 98.2277% ( 47)
00:11:47.221 1.905 - 1.912: 98.6288% ( 67)
00:11:47.221 1.912 - 1.920: 98.9222% ( 49)
00:11:47.221 1.920 - 1.928: 99.1677% ( 41)
00:11:47.221 1.928 - 1.935: 99.2515% ( 14)
00:11:47.221 1.935 - 1.943: 99.3174% ( 11)
00:11:47.221 1.943 - 1.950: 99.3473% ( 5)
00:11:47.221 1.950 - 1.966: 99.3773% ( 5)
00:11:47.221 1.981 - 1.996: 99.3893% ( 2)
00:11:47.221 1.996 - 2.011: 99.4012% ( 2)
00:11:47.221 2.042 - 2.057: 99.4072% ( 1)
00:11:47.221 2.240 - 2.255: 99.4132% ( 1)
00:11:47.221 3.170 - 3.185: 99.4192% ( 1)
00:11:47.221 3.520 - 3.535: 99.4252% ( 1)
00:11:47.221 3.688 - 3.703: 99.4312% ( 1)
00:11:47.221 3.764 - 3.779: 99.4372% ( 1)
00:11:47.221 4.023 - 4.053: 99.4431% ( 1)
00:11:47.221 4.206 - 4.236: 99.4491% ( 1)
00:11:47.221 4.297 - 4.328: 99.4551% ( 1)
00:11:47.221 4.358 - 4.389: 99.4611% ( 1)
00:11:47.221 4.602 - 4.632: 99.4671% ( 1)
00:11:47.221 4.785 - 4.815: 99.4731% ( 1)
00:11:47.221 4.846 - 4.876: 99.4791% ( 1)
00:11:47.221 4.876 - 4.907: 99.4910% ( 2)
00:11:47.221 5.059 - 5.090: 99.4970% ( 1)
00:11:47.221 5.120 - 5.150: 99.5030% ( 1)
00:11:47.221 5.364 - 5.394: 99.5090% ( 1)
00:11:47.221 5.394 - 5.425: 99.5150% ( 1)
00:11:47.221 5.425 - 5.455: 99.5210% ( 1)
00:11:47.221 5.547 - 5.577: 99.5270% ( 1)
00:11:47.221 5.577 - 5.608: 99.5330% ( 1)
00:11:47.221 5.821 - 5.851: 99.5449% ( 2)
00:11:47.221 6.034 - 6.065: 99.5509% ( 1)
00:11:47.221 8.838 - 8.899: 99.5569% ( 1)
00:11:47.221 51.688 - 51.931: 99.5629% ( 1)
00:11:47.221 143.360 - 144.335: 99.5689% ( 1)
00:11:47.221 1154.682 - 1162.484: 99.5749% ( 1)
00:11:47.221 2995.931 - 3011.535: 99.5809% ( 1)
00:11:47.221 3994.575 - 4025.783: 99.9940% ( 69)
00:11:47.221 5960.655 - 5991.863: 100.0000% ( 1)
00:11:47.221 
00:11:47.220 [2024-07-16 01:17:12.803761] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
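The aer_vfio_user step that follows arms Asynchronous Event Requests through test/nvme/aer/aer and then adds Malloc3 as a second namespace, which is what later produces the "aer_cb for log page 4" notice (log page 04h is the changed-namespace-list log). As an illustrative sketch of the application-side hook only — aer_cb is a placeholder name and the controller handle is assumed to be already connected:

#include "spdk/nvme.h"

/* AER completion: for a notice-type event, bits 23:16 of cdw0 carry the
 * log page ID; 0x04 is the changed namespace list. */
static void
aer_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	uint32_t log_page_id = (cpl->cdw0 >> 16) & 0xFF;

	if (!spdk_nvme_cpl_is_error(cpl) && log_page_id == 0x04) {
		/* Re-read the active namespace list here. */
	}
}

/* Registered once after connect:
 *     spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
 */

The -t /tmp/aer_touch_file flag in the trace below is how the script synchronizes: it waits for that file, which signals that the AER is armed, before creating Malloc3 and attaching it to the subsystem.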
00:11:47.221 01:17:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1
00:11:47.221 01:17:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1
00:11:47.221 01:17:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1
00:11:47.221 01:17:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3
00:11:47.221 01:17:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems
00:11:47.221 [
00:11:47.221 {
00:11:47.221 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:11:47.221 "subtype": "Discovery",
00:11:47.221 "listen_addresses": [],
00:11:47.221 "allow_any_host": true,
00:11:47.221 "hosts": []
00:11:47.221 },
00:11:47.221 {
00:11:47.221 "nqn": "nqn.2019-07.io.spdk:cnode1",
00:11:47.221 "subtype": "NVMe",
00:11:47.221 "listen_addresses": [
00:11:47.221 {
00:11:47.221 "trtype": "VFIOUSER",
00:11:47.221 "adrfam": "IPv4",
00:11:47.221 "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
00:11:47.221 "trsvcid": "0"
00:11:47.221 }
00:11:47.221 ],
00:11:47.221 "allow_any_host": true,
00:11:47.221 "hosts": [],
00:11:47.221 "serial_number": "SPDK1",
00:11:47.221 "model_number": "SPDK bdev Controller",
00:11:47.221 "max_namespaces": 32,
00:11:47.221 "min_cntlid": 1,
00:11:47.221 "max_cntlid": 65519,
00:11:47.221 "namespaces": [
00:11:47.221 {
00:11:47.221 "nsid": 1,
00:11:47.221 "bdev_name": "Malloc1",
00:11:47.221 "name": "Malloc1",
00:11:47.221 "nguid": "E4C6C5F84D4C46AC8A2FBDE432FF3CDF",
00:11:47.221 "uuid": "e4c6c5f8-4d4c-46ac-8a2f-bde432ff3cdf"
00:11:47.221 }
00:11:47.221 ]
00:11:47.221 },
00:11:47.221 {
00:11:47.221 "nqn": "nqn.2019-07.io.spdk:cnode2",
00:11:47.221 "subtype": "NVMe",
00:11:47.221 "listen_addresses": [
00:11:47.221 {
00:11:47.221 "trtype": "VFIOUSER",
00:11:47.221 "adrfam": "IPv4",
00:11:47.221 "traddr": "/var/run/vfio-user/domain/vfio-user2/2",
00:11:47.221 "trsvcid": "0"
00:11:47.221 }
00:11:47.221 ],
00:11:47.221 "allow_any_host": true,
00:11:47.221 "hosts": [],
00:11:47.221 "serial_number": "SPDK2",
00:11:47.221 "model_number": "SPDK bdev Controller",
00:11:47.221 "max_namespaces": 32,
00:11:47.221 "min_cntlid": 1,
00:11:47.221 "max_cntlid": 65519,
00:11:47.221 "namespaces": [
00:11:47.221 {
00:11:47.221 "nsid": 1,
00:11:47.221 "bdev_name": "Malloc2",
00:11:47.221 "name": "Malloc2",
00:11:47.221 "nguid": "59CE489BD11E4A4CB58DC631170EC291",
00:11:47.221 "uuid": "59ce489b-d11e-4a4c-b58d-c631170ec291"
00:11:47.221 }
00:11:47.221 ]
00:11:47.221 }
00:11:47.221 ]
00:11:47.221 01:17:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:11:47.221 01:17:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file
00:11:47.221 01:17:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3311856
00:11:47.221 01:17:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file
00:11:47.221 01:17:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0
00:11:47.221 01:17:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:11:47.221 01:17:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:11:47.221 01:17:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:11:47.221 01:17:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:11:47.221 01:17:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:11:47.221 EAL: No free 2048 kB hugepages reported on node 1 00:11:47.221 [2024-07-16 01:17:13.174549] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:47.221 Malloc3 00:11:47.479 01:17:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:11:47.479 [2024-07-16 01:17:13.376063] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:47.479 01:17:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:47.479 Asynchronous Event Request test 00:11:47.479 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:47.479 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:47.479 Registering asynchronous event callbacks... 00:11:47.479 Starting namespace attribute notice tests for all controllers... 00:11:47.479 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:11:47.479 aer_cb - Changed Namespace 00:11:47.479 Cleaning up... 00:11:47.738 [ 00:11:47.738 { 00:11:47.738 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:47.738 "subtype": "Discovery", 00:11:47.738 "listen_addresses": [], 00:11:47.738 "allow_any_host": true, 00:11:47.738 "hosts": [] 00:11:47.738 }, 00:11:47.738 { 00:11:47.738 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:47.738 "subtype": "NVMe", 00:11:47.738 "listen_addresses": [ 00:11:47.738 { 00:11:47.738 "trtype": "VFIOUSER", 00:11:47.738 "adrfam": "IPv4", 00:11:47.738 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:47.738 "trsvcid": "0" 00:11:47.738 } 00:11:47.738 ], 00:11:47.738 "allow_any_host": true, 00:11:47.738 "hosts": [], 00:11:47.738 "serial_number": "SPDK1", 00:11:47.738 "model_number": "SPDK bdev Controller", 00:11:47.738 "max_namespaces": 32, 00:11:47.738 "min_cntlid": 1, 00:11:47.738 "max_cntlid": 65519, 00:11:47.738 "namespaces": [ 00:11:47.738 { 00:11:47.738 "nsid": 1, 00:11:47.738 "bdev_name": "Malloc1", 00:11:47.738 "name": "Malloc1", 00:11:47.738 "nguid": "E4C6C5F84D4C46AC8A2FBDE432FF3CDF", 00:11:47.738 "uuid": "e4c6c5f8-4d4c-46ac-8a2f-bde432ff3cdf" 00:11:47.738 }, 00:11:47.738 { 00:11:47.738 "nsid": 2, 00:11:47.738 "bdev_name": "Malloc3", 00:11:47.738 "name": "Malloc3", 00:11:47.738 "nguid": "FCCBF553DD8A4D3FAEFCCC8C6227A67E", 00:11:47.738 "uuid": "fccbf553-dd8a-4d3f-aefc-cc8c6227a67e" 00:11:47.738 } 00:11:47.738 ] 00:11:47.738 }, 00:11:47.738 { 00:11:47.738 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:47.738 "subtype": "NVMe", 00:11:47.738 "listen_addresses": [ 00:11:47.738 { 00:11:47.738 "trtype": "VFIOUSER", 00:11:47.738 "adrfam": "IPv4", 00:11:47.738 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:47.738 "trsvcid": "0" 00:11:47.738 } 00:11:47.738 ], 00:11:47.738 "allow_any_host": true, 00:11:47.738 "hosts": [], 00:11:47.738 "serial_number": "SPDK2", 00:11:47.738 "model_number": "SPDK bdev Controller", 00:11:47.738 
"max_namespaces": 32, 00:11:47.738 "min_cntlid": 1, 00:11:47.738 "max_cntlid": 65519, 00:11:47.738 "namespaces": [ 00:11:47.738 { 00:11:47.738 "nsid": 1, 00:11:47.738 "bdev_name": "Malloc2", 00:11:47.738 "name": "Malloc2", 00:11:47.738 "nguid": "59CE489BD11E4A4CB58DC631170EC291", 00:11:47.738 "uuid": "59ce489b-d11e-4a4c-b58d-c631170ec291" 00:11:47.738 } 00:11:47.738 ] 00:11:47.738 } 00:11:47.738 ] 00:11:47.738 01:17:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3311856 00:11:47.738 01:17:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:47.738 01:17:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:11:47.738 01:17:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:11:47.738 01:17:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:47.738 [2024-07-16 01:17:13.608196] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:11:47.738 [2024-07-16 01:17:13.608228] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3311923 ] 00:11:47.738 EAL: No free 2048 kB hugepages reported on node 1 00:11:47.738 [2024-07-16 01:17:13.637427] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:11:47.738 [2024-07-16 01:17:13.640041] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:47.738 [2024-07-16 01:17:13.640062] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ff54b569000 00:11:47.738 [2024-07-16 01:17:13.641049] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:47.738 [2024-07-16 01:17:13.642051] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:47.738 [2024-07-16 01:17:13.643056] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:47.738 [2024-07-16 01:17:13.644059] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:47.738 [2024-07-16 01:17:13.645074] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:47.738 [2024-07-16 01:17:13.646075] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:47.738 [2024-07-16 01:17:13.647075] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:47.738 [2024-07-16 01:17:13.648087] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:47.738 [2024-07-16 01:17:13.649094] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:47.738 [2024-07-16 01:17:13.649104] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ff54b55e000 00:11:47.738 [2024-07-16 01:17:13.650132] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:47.738 [2024-07-16 01:17:13.663415] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:11:47.738 [2024-07-16 01:17:13.663440] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:11:47.738 [2024-07-16 01:17:13.668520] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:11:47.738 [2024-07-16 01:17:13.668559] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:47.738 [2024-07-16 01:17:13.668635] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:11:47.738 [2024-07-16 01:17:13.668651] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:11:47.738 [2024-07-16 01:17:13.668656] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:11:47.738 [2024-07-16 01:17:13.669521] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:11:47.738 [2024-07-16 01:17:13.669531] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:11:47.738 [2024-07-16 01:17:13.669538] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:11:47.738 [2024-07-16 01:17:13.670527] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:11:47.738 [2024-07-16 01:17:13.670536] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:11:47.738 [2024-07-16 01:17:13.670545] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:11:47.738 [2024-07-16 01:17:13.671540] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:11:47.738 [2024-07-16 01:17:13.671549] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:47.738 [2024-07-16 01:17:13.672540] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:11:47.738 [2024-07-16 01:17:13.672550] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:11:47.738 [2024-07-16 01:17:13.672554] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:11:47.738 [2024-07-16 01:17:13.672560] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:47.738 [2024-07-16 01:17:13.672665] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:11:47.739 [2024-07-16 01:17:13.672669] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:47.739 [2024-07-16 01:17:13.672674] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:11:47.739 [2024-07-16 01:17:13.673551] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:11:47.739 [2024-07-16 01:17:13.674559] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:11:47.739 [2024-07-16 01:17:13.675570] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:11:47.739 [2024-07-16 01:17:13.676572] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:47.739 [2024-07-16 01:17:13.676610] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:47.739 [2024-07-16 01:17:13.677581] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:11:47.739 [2024-07-16 01:17:13.677590] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:47.739 [2024-07-16 01:17:13.677594] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:11:47.739 [2024-07-16 01:17:13.677611] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:11:47.739 [2024-07-16 01:17:13.677618] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:11:47.739 [2024-07-16 01:17:13.677630] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:47.739 [2024-07-16 01:17:13.677635] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:47.739 [2024-07-16 01:17:13.677646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:47.739 [2024-07-16 01:17:13.686347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:47.739 [2024-07-16 01:17:13.686362] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:11:47.739 [2024-07-16 01:17:13.686366] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:11:47.739 [2024-07-16 01:17:13.686370] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:11:47.739 [2024-07-16 01:17:13.686374] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:47.739 [2024-07-16 01:17:13.686379] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:11:47.739 [2024-07-16 01:17:13.686384] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:11:47.739 [2024-07-16 01:17:13.686388] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:11:47.739 [2024-07-16 01:17:13.686395] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:11:47.739 [2024-07-16 01:17:13.686407] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:11:47.739 [2024-07-16 01:17:13.694344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:47.739 [2024-07-16 01:17:13.694356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.739 [2024-07-16 01:17:13.694364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.739 [2024-07-16 01:17:13.694371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.739 [2024-07-16 01:17:13.694378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.739 [2024-07-16 01:17:13.694383] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:11:47.739 [2024-07-16 01:17:13.694391] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:47.739 [2024-07-16 01:17:13.694399] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:47.739 [2024-07-16 01:17:13.702343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:47.739 [2024-07-16 01:17:13.702351] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:11:47.739 [2024-07-16 01:17:13.702356] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:47.739 [2024-07-16 01:17:13.702364] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:11:47.739 [2024-07-16 01:17:13.702369] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:11:47.739 [2024-07-16 01:17:13.702377] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:47.739 [2024-07-16 01:17:13.710343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:47.739 [2024-07-16 01:17:13.710398] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:11:47.739 [2024-07-16 01:17:13.710408] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:11:47.739 [2024-07-16 01:17:13.710415] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:47.739 [2024-07-16 01:17:13.710419] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:47.739 [2024-07-16 01:17:13.710426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:11:47.739 [2024-07-16 01:17:13.718352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:47.739 [2024-07-16 01:17:13.718363] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:11:47.739 [2024-07-16 01:17:13.718376] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:11:47.739 [2024-07-16 01:17:13.718383] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:11:47.739 [2024-07-16 01:17:13.718389] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:47.739 [2024-07-16 01:17:13.718394] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:47.739 [2024-07-16 01:17:13.718400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:47.998 [2024-07-16 01:17:13.726344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:47.998 [2024-07-16 01:17:13.726358] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:11:47.998 [2024-07-16 01:17:13.726366] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:11:47.998 [2024-07-16 01:17:13.726373] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:47.998 [2024-07-16 01:17:13.726377] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:47.998 [2024-07-16 01:17:13.726383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:47.998 [2024-07-16 01:17:13.734345] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:47.998 [2024-07-16 01:17:13.734354] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:47.998 [2024-07-16 01:17:13.734360] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:11:47.998 [2024-07-16 01:17:13.734369] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:11:47.998 [2024-07-16 01:17:13.734375] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:11:47.998 [2024-07-16 01:17:13.734379] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:47.998 [2024-07-16 01:17:13.734384] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:11:47.998 [2024-07-16 01:17:13.734389] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:11:47.998 [2024-07-16 01:17:13.734393] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:11:47.998 [2024-07-16 01:17:13.734399] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:11:47.998 [2024-07-16 01:17:13.734415] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:47.998 [2024-07-16 01:17:13.742343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:47.998 [2024-07-16 01:17:13.742356] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:47.998 [2024-07-16 01:17:13.749949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:47.998 [2024-07-16 01:17:13.749963] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:47.998 [2024-07-16 01:17:13.757343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:47.998 [2024-07-16 01:17:13.757357] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:47.998 [2024-07-16 01:17:13.765343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:47.998 [2024-07-16 01:17:13.765361] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:47.999 [2024-07-16 01:17:13.765366] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:47.999 [2024-07-16 01:17:13.765369] nvme_pcie_common.c:1240:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 
00:11:47.999 [2024-07-16 01:17:13.765372] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:47.999 [2024-07-16 01:17:13.765378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:47.999 [2024-07-16 01:17:13.765384] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:47.999 [2024-07-16 01:17:13.765388] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:47.999 [2024-07-16 01:17:13.765393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:47.999 [2024-07-16 01:17:13.765400] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:47.999 [2024-07-16 01:17:13.765403] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:47.999 [2024-07-16 01:17:13.765408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:47.999 [2024-07-16 01:17:13.765415] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:47.999 [2024-07-16 01:17:13.765419] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:47.999 [2024-07-16 01:17:13.765425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:47.999 [2024-07-16 01:17:13.773344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:47.999 [2024-07-16 01:17:13.773358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:47.999 [2024-07-16 01:17:13.773367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:47.999 [2024-07-16 01:17:13.773373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:47.999 ===================================================== 00:11:47.999 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:47.999 ===================================================== 00:11:47.999 Controller Capabilities/Features 00:11:47.999 ================================ 00:11:47.999 Vendor ID: 4e58 00:11:47.999 Subsystem Vendor ID: 4e58 00:11:47.999 Serial Number: SPDK2 00:11:47.999 Model Number: SPDK bdev Controller 00:11:47.999 Firmware Version: 24.09 00:11:47.999 Recommended Arb Burst: 6 00:11:47.999 IEEE OUI Identifier: 8d 6b 50 00:11:47.999 Multi-path I/O 00:11:47.999 May have multiple subsystem ports: Yes 00:11:47.999 May have multiple controllers: Yes 00:11:47.999 Associated with SR-IOV VF: No 00:11:47.999 Max Data Transfer Size: 131072 00:11:47.999 Max Number of Namespaces: 32 00:11:47.999 Max Number of I/O Queues: 127 00:11:47.999 NVMe Specification Version (VS): 1.3 00:11:47.999 NVMe Specification Version (Identify): 1.3 00:11:47.999 Maximum Queue Entries: 256 00:11:47.999 Contiguous Queues Required: Yes 00:11:47.999 Arbitration Mechanisms 
Supported 00:11:47.999 Weighted Round Robin: Not Supported 00:11:47.999 Vendor Specific: Not Supported 00:11:47.999 Reset Timeout: 15000 ms 00:11:47.999 Doorbell Stride: 4 bytes 00:11:47.999 NVM Subsystem Reset: Not Supported 00:11:47.999 Command Sets Supported 00:11:47.999 NVM Command Set: Supported 00:11:47.999 Boot Partition: Not Supported 00:11:47.999 Memory Page Size Minimum: 4096 bytes 00:11:47.999 Memory Page Size Maximum: 4096 bytes 00:11:47.999 Persistent Memory Region: Not Supported 00:11:47.999 Optional Asynchronous Events Supported 00:11:47.999 Namespace Attribute Notices: Supported 00:11:47.999 Firmware Activation Notices: Not Supported 00:11:47.999 ANA Change Notices: Not Supported 00:11:47.999 PLE Aggregate Log Change Notices: Not Supported 00:11:47.999 LBA Status Info Alert Notices: Not Supported 00:11:47.999 EGE Aggregate Log Change Notices: Not Supported 00:11:47.999 Normal NVM Subsystem Shutdown event: Not Supported 00:11:47.999 Zone Descriptor Change Notices: Not Supported 00:11:47.999 Discovery Log Change Notices: Not Supported 00:11:47.999 Controller Attributes 00:11:47.999 128-bit Host Identifier: Supported 00:11:47.999 Non-Operational Permissive Mode: Not Supported 00:11:47.999 NVM Sets: Not Supported 00:11:47.999 Read Recovery Levels: Not Supported 00:11:47.999 Endurance Groups: Not Supported 00:11:47.999 Predictable Latency Mode: Not Supported 00:11:47.999 Traffic Based Keep ALive: Not Supported 00:11:47.999 Namespace Granularity: Not Supported 00:11:47.999 SQ Associations: Not Supported 00:11:47.999 UUID List: Not Supported 00:11:47.999 Multi-Domain Subsystem: Not Supported 00:11:47.999 Fixed Capacity Management: Not Supported 00:11:47.999 Variable Capacity Management: Not Supported 00:11:47.999 Delete Endurance Group: Not Supported 00:11:47.999 Delete NVM Set: Not Supported 00:11:47.999 Extended LBA Formats Supported: Not Supported 00:11:47.999 Flexible Data Placement Supported: Not Supported 00:11:47.999 00:11:47.999 Controller Memory Buffer Support 00:11:47.999 ================================ 00:11:47.999 Supported: No 00:11:47.999 00:11:47.999 Persistent Memory Region Support 00:11:47.999 ================================ 00:11:47.999 Supported: No 00:11:47.999 00:11:47.999 Admin Command Set Attributes 00:11:47.999 ============================ 00:11:47.999 Security Send/Receive: Not Supported 00:11:47.999 Format NVM: Not Supported 00:11:47.999 Firmware Activate/Download: Not Supported 00:11:47.999 Namespace Management: Not Supported 00:11:47.999 Device Self-Test: Not Supported 00:11:47.999 Directives: Not Supported 00:11:47.999 NVMe-MI: Not Supported 00:11:47.999 Virtualization Management: Not Supported 00:11:47.999 Doorbell Buffer Config: Not Supported 00:11:47.999 Get LBA Status Capability: Not Supported 00:11:47.999 Command & Feature Lockdown Capability: Not Supported 00:11:47.999 Abort Command Limit: 4 00:11:47.999 Async Event Request Limit: 4 00:11:47.999 Number of Firmware Slots: N/A 00:11:47.999 Firmware Slot 1 Read-Only: N/A 00:11:47.999 Firmware Activation Without Reset: N/A 00:11:47.999 Multiple Update Detection Support: N/A 00:11:47.999 Firmware Update Granularity: No Information Provided 00:11:47.999 Per-Namespace SMART Log: No 00:11:47.999 Asymmetric Namespace Access Log Page: Not Supported 00:11:47.999 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:11:47.999 Command Effects Log Page: Supported 00:11:47.999 Get Log Page Extended Data: Supported 00:11:47.999 Telemetry Log Pages: Not Supported 00:11:47.999 Persistent Event Log Pages: Not Supported 
00:11:47.999 Supported Log Pages Log Page: May Support 00:11:47.999 Commands Supported & Effects Log Page: Not Supported 00:11:47.999 Feature Identifiers & Effects Log Page:May Support 00:11:47.999 NVMe-MI Commands & Effects Log Page: May Support 00:11:47.999 Data Area 4 for Telemetry Log: Not Supported 00:11:47.999 Error Log Page Entries Supported: 128 00:11:47.999 Keep Alive: Supported 00:11:47.999 Keep Alive Granularity: 10000 ms 00:11:47.999 00:11:47.999 NVM Command Set Attributes 00:11:47.999 ========================== 00:11:47.999 Submission Queue Entry Size 00:11:47.999 Max: 64 00:11:47.999 Min: 64 00:11:47.999 Completion Queue Entry Size 00:11:47.999 Max: 16 00:11:47.999 Min: 16 00:11:47.999 Number of Namespaces: 32 00:11:47.999 Compare Command: Supported 00:11:47.999 Write Uncorrectable Command: Not Supported 00:11:47.999 Dataset Management Command: Supported 00:11:47.999 Write Zeroes Command: Supported 00:11:47.999 Set Features Save Field: Not Supported 00:11:47.999 Reservations: Not Supported 00:11:47.999 Timestamp: Not Supported 00:11:47.999 Copy: Supported 00:11:47.999 Volatile Write Cache: Present 00:11:47.999 Atomic Write Unit (Normal): 1 00:11:47.999 Atomic Write Unit (PFail): 1 00:11:47.999 Atomic Compare & Write Unit: 1 00:11:47.999 Fused Compare & Write: Supported 00:11:47.999 Scatter-Gather List 00:11:47.999 SGL Command Set: Supported (Dword aligned) 00:11:47.999 SGL Keyed: Not Supported 00:11:47.999 SGL Bit Bucket Descriptor: Not Supported 00:11:47.999 SGL Metadata Pointer: Not Supported 00:11:47.999 Oversized SGL: Not Supported 00:11:47.999 SGL Metadata Address: Not Supported 00:11:47.999 SGL Offset: Not Supported 00:11:47.999 Transport SGL Data Block: Not Supported 00:11:47.999 Replay Protected Memory Block: Not Supported 00:11:47.999 00:11:47.999 Firmware Slot Information 00:11:47.999 ========================= 00:11:47.999 Active slot: 1 00:11:47.999 Slot 1 Firmware Revision: 24.09 00:11:47.999 00:11:47.999 00:11:47.999 Commands Supported and Effects 00:11:47.999 ============================== 00:11:47.999 Admin Commands 00:11:47.999 -------------- 00:11:47.999 Get Log Page (02h): Supported 00:11:47.999 Identify (06h): Supported 00:11:47.999 Abort (08h): Supported 00:11:47.999 Set Features (09h): Supported 00:11:47.999 Get Features (0Ah): Supported 00:11:47.999 Asynchronous Event Request (0Ch): Supported 00:11:47.999 Keep Alive (18h): Supported 00:11:47.999 I/O Commands 00:11:47.999 ------------ 00:11:47.999 Flush (00h): Supported LBA-Change 00:11:47.999 Write (01h): Supported LBA-Change 00:11:47.999 Read (02h): Supported 00:11:47.999 Compare (05h): Supported 00:11:47.999 Write Zeroes (08h): Supported LBA-Change 00:11:47.999 Dataset Management (09h): Supported LBA-Change 00:11:47.999 Copy (19h): Supported LBA-Change 00:11:47.999 00:11:47.999 Error Log 00:11:47.999 ========= 00:11:47.999 00:11:47.999 Arbitration 00:11:47.999 =========== 00:11:47.999 Arbitration Burst: 1 00:11:48.000 00:11:48.000 Power Management 00:11:48.000 ================ 00:11:48.000 Number of Power States: 1 00:11:48.000 Current Power State: Power State #0 00:11:48.000 Power State #0: 00:11:48.000 Max Power: 0.00 W 00:11:48.000 Non-Operational State: Operational 00:11:48.000 Entry Latency: Not Reported 00:11:48.000 Exit Latency: Not Reported 00:11:48.000 Relative Read Throughput: 0 00:11:48.000 Relative Read Latency: 0 00:11:48.000 Relative Write Throughput: 0 00:11:48.000 Relative Write Latency: 0 00:11:48.000 Idle Power: Not Reported 00:11:48.000 Active Power: Not Reported 00:11:48.000 
Non-Operational Permissive Mode: Not Supported 00:11:48.000 00:11:48.000 Health Information 00:11:48.000 ================== 00:11:48.000 Critical Warnings: 00:11:48.000 Available Spare Space: OK 00:11:48.000 Temperature: OK 00:11:48.000 Device Reliability: OK 00:11:48.000 Read Only: No 00:11:48.000 Volatile Memory Backup: OK 00:11:48.000 Current Temperature: 0 Kelvin (-273 Celsius) 00:11:48.000 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:48.000 Available Spare: 0% 00:11:48.000 Available Spare Threshold: 0% 00:11:48.000 Life Percentage Used: 0% 00:11:48.000 Data Units Read: 0 00:11:48.000 Data Units Written: 0 00:11:48.000 Host Read Commands: 0 00:11:48.000 Host Write Commands: 0 00:11:48.000 Controller Busy Time: 0 minutes 00:11:48.000 Power Cycles: 0 00:11:48.000 Power On Hours: 0 hours 00:11:48.000 Unsafe Shutdowns: 0 00:11:48.000 Unrecoverable Media Errors: 0 00:11:48.000 Lifetime Error Log Entries: 0 00:11:48.000 Warning Temperature Time: 0 minutes 00:11:48.000 Critical Temperature Time: 0 minutes 00:11:48.000 
00:11:48.000 Number of Queues 00:11:48.000 ================ 00:11:48.000 Number of I/O Submission Queues: 127 00:11:48.000 Number of I/O Completion Queues: 127 00:11:48.000 00:11:48.000 Active Namespaces 00:11:48.000 ================= 00:11:48.000 Namespace ID:1 00:11:48.000 Error Recovery Timeout: Unlimited 00:11:48.000 Command Set Identifier: NVM (00h) 00:11:48.000 Deallocate: Supported 00:11:48.000 Deallocated/Unwritten Error: Not Supported 00:11:48.000 Deallocated Read Value: Unknown 00:11:48.000 Deallocate in Write Zeroes: Not Supported 00:11:48.000 Deallocated Guard Field: 0xFFFF 00:11:48.000 Flush: Supported 00:11:48.000 Reservation: Supported 00:11:48.000 Namespace Sharing Capabilities: Multiple Controllers 00:11:48.000 Size (in LBAs): 131072 (0GiB) 00:11:48.000 Capacity (in LBAs): 131072 (0GiB) 00:11:48.000 Utilization (in LBAs): 131072 (0GiB) 00:11:48.000 NGUID: 59CE489BD11E4A4CB58DC631170EC291 00:11:48.000 UUID: 59ce489b-d11e-4a4c-b58d-c631170ec291 00:11:48.000 Thin Provisioning: Not Supported 00:11:48.000 Per-NS Atomic Units: Yes 00:11:48.000 Atomic Boundary Size (Normal): 0 00:11:48.000 Atomic Boundary Size (PFail): 0 00:11:48.000 Atomic Boundary Offset: 0 00:11:48.000 Maximum Single Source Range Length: 65535 00:11:48.000 Maximum Copy Length: 65535 00:11:48.000 Maximum Source Range Count: 1 00:11:48.000 NGUID/EUI64 Never Reused: No 00:11:48.000 Namespace Write Protected: No 00:11:48.000 Number of LBA Formats: 1 00:11:48.000 Current LBA Format: LBA Format #00 00:11:48.000 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:48.000 
[2024-07-16 01:17:13.773459] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:48.000 [2024-07-16 01:17:13.781344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:11:48.000 [2024-07-16 01:17:13.781372] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:11:48.000 [2024-07-16 01:17:13.781381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.000 [2024-07-16 01:17:13.781386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.000 [2024-07-16 01:17:13.781392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.000 [2024-07-16 01:17:13.781397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.000 [2024-07-16 01:17:13.781440] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:11:48.000 [2024-07-16 01:17:13.781450] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:11:48.000 [2024-07-16 01:17:13.782447] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:48.000 [2024-07-16 01:17:13.782490] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:11:48.000 [2024-07-16 01:17:13.782496] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:11:48.000 [2024-07-16 01:17:13.783452] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:11:48.000 [2024-07-16 01:17:13.783464] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:11:48.000 [2024-07-16 01:17:13.783512] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:11:48.000 [2024-07-16 01:17:13.784584] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:48.000 
00:11:48.000 01:17:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:48.000 EAL: No free 2048 kB hugepages reported on node 1 00:11:48.258 [2024-07-16 01:17:14.003325] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:53.540 Initializing NVMe Controllers 00:11:53.540 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:53.540 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:11:53.540 Initialization complete. Launching workers. 
00:11:53.540 ======================================================== 00:11:53.540 Latency(us) 00:11:53.540 Device Information : IOPS MiB/s Average min max 00:11:53.540 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39933.74 155.99 3205.14 954.24 9487.79 00:11:53.540 ======================================================== 00:11:53.540 Total : 39933.74 155.99 3205.14 954.24 9487.79 00:11:53.540 00:11:53.540 [2024-07-16 01:17:19.109580] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:53.540 01:17:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:11:53.540 EAL: No free 2048 kB hugepages reported on node 1 00:11:53.540 [2024-07-16 01:17:19.341262] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:58.802 Initializing NVMe Controllers 00:11:58.802 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:58.802 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:11:58.802 Initialization complete. Launching workers. 00:11:58.802 ======================================================== 00:11:58.802 Latency(us) 00:11:58.802 Device Information : IOPS MiB/s Average min max 00:11:58.803 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39908.80 155.89 3207.53 948.09 10622.74 00:11:58.803 ======================================================== 00:11:58.803 Total : 39908.80 155.89 3207.53 948.09 10622.74 00:11:58.803 00:11:58.803 [2024-07-16 01:17:24.365873] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:58.803 01:17:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:11:58.803 EAL: No free 2048 kB hugepages reported on node 1 00:11:58.803 [2024-07-16 01:17:24.560937] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:04.072 [2024-07-16 01:17:29.697431] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:04.072 Initializing NVMe Controllers 00:12:04.072 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:04.072 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:04.072 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:04.072 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:12:04.072 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:04.072 Initialization complete. Launching workers. 
00:12:04.072 Starting thread on core 2 00:12:04.072 Starting thread on core 3 00:12:04.072 Starting thread on core 1 00:12:04.072 01:17:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:12:04.072 EAL: No free 2048 kB hugepages reported on node 1 00:12:04.072 [2024-07-16 01:17:29.976739] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:07.359 [2024-07-16 01:17:33.037266] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:07.359 Initializing NVMe Controllers 00:12:07.359 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:07.359 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:07.359 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:12:07.359 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:12:07.359 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:12:07.359 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:12:07.359 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:07.359 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:07.359 Initialization complete. Launching workers. 00:12:07.359 Starting thread on core 1 with urgent priority queue 00:12:07.359 Starting thread on core 2 with urgent priority queue 00:12:07.359 Starting thread on core 3 with urgent priority queue 00:12:07.359 Starting thread on core 0 with urgent priority queue 00:12:07.359 SPDK bdev Controller (SPDK2 ) core 0: 8080.67 IO/s 12.38 secs/100000 ios 00:12:07.359 SPDK bdev Controller (SPDK2 ) core 1: 6685.67 IO/s 14.96 secs/100000 ios 00:12:07.359 SPDK bdev Controller (SPDK2 ) core 2: 9259.00 IO/s 10.80 secs/100000 ios 00:12:07.359 SPDK bdev Controller (SPDK2 ) core 3: 7213.67 IO/s 13.86 secs/100000 ios 00:12:07.359 ======================================================== 00:12:07.359 00:12:07.359 01:17:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:07.359 EAL: No free 2048 kB hugepages reported on node 1 00:12:07.359 [2024-07-16 01:17:33.316778] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:07.359 Initializing NVMe Controllers 00:12:07.359 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:07.359 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:07.359 Namespace ID: 1 size: 0GB 00:12:07.359 Initialization complete. 00:12:07.359 INFO: using host memory buffer for IO 00:12:07.359 Hello world! 
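A minimal sketch of the invocation pattern the runs above share: each example binary takes a single SPDK transport ID via -r, with trtype:VFIOUSER, the controller's socket directory as traddr, and the target subsystem as subnqn. The path, NQN, and flags below are the ones used in this run; the SPDK variable is assumed to point at this workspace's checkout.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
  # 5 s of 4 KiB reads at queue depth 128, pinned to core 1 (-c 0x2)
  $SPDK/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
  # Single-I/O smoke test against the same controller
  $SPDK/build/examples/hello_world -d 256 -g -r "$TRID"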
00:12:07.359 [2024-07-16 01:17:33.326837] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:07.617 01:17:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:07.617 EAL: No free 2048 kB hugepages reported on node 1 00:12:07.617 [2024-07-16 01:17:33.597972] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:08.994 Initializing NVMe Controllers 00:12:08.994 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:08.994 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:08.994 Initialization complete. Launching workers. 00:12:08.994 submit (in ns) avg, min, max = 6536.0, 3161.9, 3998800.0 00:12:08.994 complete (in ns) avg, min, max = 20928.1, 1708.6, 5991600.0 00:12:08.994 00:12:08.994 Submit histogram 00:12:08.994 ================ 00:12:08.994 Range in us Cumulative Count 00:12:08.994 3.154 - 3.170: 0.0060% ( 1) 00:12:08.994 3.200 - 3.215: 0.0239% ( 3) 00:12:08.994 3.215 - 3.230: 0.1078% ( 14) 00:12:08.994 3.230 - 3.246: 0.2515% ( 24) 00:12:08.994 3.246 - 3.261: 0.7364% ( 81) 00:12:08.994 3.261 - 3.276: 2.5864% ( 309) 00:12:08.994 3.276 - 3.291: 6.6754% ( 683) 00:12:08.994 3.291 - 3.307: 12.7223% ( 1010) 00:12:08.994 3.307 - 3.322: 19.2181% ( 1085) 00:12:08.994 3.322 - 3.337: 26.2767% ( 1179) 00:12:08.994 3.337 - 3.352: 32.2218% ( 993) 00:12:08.994 3.352 - 3.368: 37.6040% ( 899) 00:12:08.994 3.368 - 3.383: 43.5072% ( 986) 00:12:08.994 3.383 - 3.398: 49.0331% ( 923) 00:12:08.994 3.398 - 3.413: 54.3136% ( 882) 00:12:08.994 3.413 - 3.429: 60.8513% ( 1092) 00:12:08.994 3.429 - 3.444: 68.1854% ( 1225) 00:12:08.994 3.444 - 3.459: 73.4060% ( 872) 00:12:08.994 3.459 - 3.474: 77.9082% ( 752) 00:12:08.994 3.474 - 3.490: 81.9793% ( 680) 00:12:08.994 3.490 - 3.505: 84.4459% ( 412) 00:12:08.994 3.505 - 3.520: 85.9846% ( 257) 00:12:08.994 3.520 - 3.535: 86.7569% ( 129) 00:12:08.994 3.535 - 3.550: 87.1760% ( 70) 00:12:08.994 3.550 - 3.566: 87.5711% ( 66) 00:12:08.994 3.566 - 3.581: 88.1638% ( 99) 00:12:08.994 3.581 - 3.596: 89.0020% ( 140) 00:12:08.994 3.596 - 3.611: 90.0916% ( 182) 00:12:08.994 3.611 - 3.627: 90.9717% ( 147) 00:12:08.994 3.627 - 3.642: 91.9535% ( 164) 00:12:08.994 3.642 - 3.657: 92.9474% ( 166) 00:12:08.994 3.657 - 3.672: 93.8514% ( 151) 00:12:08.994 3.672 - 3.688: 94.9889% ( 190) 00:12:08.994 3.688 - 3.703: 96.0127% ( 171) 00:12:08.994 3.703 - 3.718: 96.9526% ( 157) 00:12:08.994 3.718 - 3.733: 97.7070% ( 126) 00:12:08.994 3.733 - 3.749: 98.3536% ( 108) 00:12:08.994 3.749 - 3.764: 98.7787% ( 71) 00:12:08.994 3.764 - 3.779: 99.0421% ( 44) 00:12:08.994 3.779 - 3.794: 99.2876% ( 41) 00:12:08.994 3.794 - 3.810: 99.4312% ( 24) 00:12:08.994 3.810 - 3.825: 99.5330% ( 17) 00:12:08.994 3.825 - 3.840: 99.6049% ( 12) 00:12:08.994 3.840 - 3.855: 99.6468% ( 7) 00:12:08.994 3.855 - 3.870: 99.6587% ( 2) 00:12:08.994 3.870 - 3.886: 99.6827% ( 4) 00:12:08.994 3.901 - 3.931: 99.6947% ( 2) 00:12:08.994 3.931 - 3.962: 99.7007% ( 1) 00:12:08.994 4.053 - 4.084: 99.7066% ( 1) 00:12:08.994 5.242 - 5.272: 99.7126% ( 1) 00:12:08.994 5.303 - 5.333: 99.7186% ( 1) 00:12:08.994 5.364 - 5.394: 99.7246% ( 1) 00:12:08.994 5.486 - 5.516: 99.7306% ( 1) 00:12:08.994 5.516 - 5.547: 99.7366% ( 1) 00:12:08.994 5.638 - 5.669: 99.7426% ( 1) 00:12:08.994 5.669 - 5.699: 99.7485% ( 1) 
00:12:08.994 5.760 - 5.790: 99.7545% ( 1) 00:12:08.994 5.882 - 5.912: 99.7605% ( 1) 00:12:08.994 5.973 - 6.004: 99.7665% ( 1) 00:12:08.994 6.034 - 6.065: 99.7725% ( 1) 00:12:08.994 6.095 - 6.126: 99.7785% ( 1) 00:12:08.994 6.217 - 6.248: 99.7905% ( 2) 00:12:08.994 6.278 - 6.309: 99.7964% ( 1) 00:12:08.994 6.339 - 6.370: 99.8024% ( 1) 00:12:08.994 6.430 - 6.461: 99.8084% ( 1) 00:12:08.994 6.461 - 6.491: 99.8144% ( 1) 00:12:08.994 6.552 - 6.583: 99.8204% ( 1) 00:12:08.994 6.583 - 6.613: 99.8264% ( 1) 00:12:08.994 6.644 - 6.674: 99.8324% ( 1) 00:12:08.994 6.796 - 6.827: 99.8384% ( 1) 00:12:08.994 6.918 - 6.949: 99.8443% ( 1) 00:12:08.994 7.010 - 7.040: 99.8503% ( 1) 00:12:08.994 7.040 - 7.070: 99.8563% ( 1) 00:12:08.994 7.101 - 7.131: 99.8623% ( 1) 00:12:08.994 7.436 - 7.467: 99.8683% ( 1) 00:12:08.994 7.497 - 7.528: 99.8743% ( 1) 00:12:08.994 7.589 - 7.619: 99.8803% ( 1) 00:12:08.994 7.650 - 7.680: 99.8862% ( 1) 00:12:08.994 7.771 - 7.802: 99.8922% ( 1) 00:12:08.995 8.594 - 8.655: 99.8982% ( 1) 00:12:08.995 8.838 - 8.899: 99.9042% ( 1) 00:12:08.995 8.899 - 8.960: 99.9102% ( 1) 00:12:08.995 10.118 - 10.179: 99.9162% ( 1) 00:12:08.995 13.958 - 14.019: 99.9222% ( 1) 00:12:08.995 3994.575 - 4025.783: 100.0000% ( 13) 00:12:08.995 
[2024-07-16 01:17:34.700319] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:08.995 
00:12:08.995 Complete histogram 00:12:08.995 ================== 00:12:08.995 Range in us Cumulative Count 00:12:08.995 1.707 - 1.714: 0.0359% ( 6) 00:12:08.995 1.714 - 1.722: 0.1497% ( 19) 00:12:08.995 1.722 - 1.730: 0.3233% ( 29) 00:12:08.995 1.730 - 1.737: 0.3951% ( 12) 00:12:08.995 1.737 - 1.745: 0.4011% ( 1) 00:12:08.995 1.745 - 1.752: 0.4191% ( 3) 00:12:08.995 1.752 - 1.760: 1.6883% ( 212) 00:12:08.995 1.760 - 1.768: 17.0209% ( 2561) 00:12:08.995 1.768 - 1.775: 54.7926% ( 6309) 00:12:08.995 1.775 - 1.783: 80.4406% ( 4284) 00:12:08.995 1.783 - 1.790: 87.2538% ( 1138) 00:12:08.995 1.790 - 1.798: 90.0317% ( 464) 00:12:08.995 1.798 - 1.806: 92.4864% ( 410) 00:12:08.995 1.806 - 1.813: 93.7855% ( 217) 00:12:08.995 1.813 - 1.821: 94.3723% ( 98) 00:12:08.995 1.821 - 1.829: 94.8991% ( 88) 00:12:08.995 1.829 - 1.836: 95.6355% ( 123) 00:12:08.995 1.836 - 1.844: 96.5276% ( 149) 00:12:08.995 1.844 - 1.851: 97.4735% ( 158) 00:12:08.995 1.851 - 1.859: 98.2398% ( 128) 00:12:08.995 1.859 - 1.867: 98.7487% ( 85) 00:12:08.995 1.867 - 1.874: 98.9104% ( 27) 00:12:08.995 1.874 - 1.882: 99.0541% ( 24) 00:12:08.995 1.882 - 1.890: 99.1259% ( 12) 00:12:08.995 1.890 - 1.897: 99.2037% ( 13) 00:12:08.995 1.897 - 1.905: 99.2157% ( 2) 00:12:08.995 1.905 - 1.912: 99.2277% ( 2) 00:12:08.995 1.912 - 1.920: 99.2456% ( 3) 00:12:08.995 1.920 - 1.928: 99.2516% ( 1) 00:12:08.995 1.928 - 1.935: 99.2636% ( 2) 00:12:08.995 1.935 - 1.943: 99.2756% ( 2) 00:12:08.995 1.943 - 1.950: 99.2816% ( 1) 00:12:08.995 1.950 - 1.966: 99.3115% ( 5) 00:12:08.995 1.966 - 1.981: 99.3235% ( 2) 00:12:08.995 1.981 - 1.996: 99.3354% ( 2) 00:12:08.995 2.042 - 2.057: 99.3414% ( 1) 00:12:08.995 3.413 - 3.429: 99.3474% ( 1) 00:12:08.995 3.718 - 3.733: 99.3534% ( 1) 00:12:08.995 3.794 - 3.810: 99.3594% ( 1) 00:12:08.995 3.992 - 4.023: 99.3654% ( 1) 00:12:08.995 4.206 - 4.236: 99.3714% ( 1) 00:12:08.995 4.389 - 4.419: 99.3833% ( 2) 00:12:08.995 4.450 - 4.480: 99.3893% ( 1) 00:12:08.995 4.663 - 4.693: 99.4013% ( 2) 00:12:08.995 4.724 - 4.754: 99.4193% ( 3) 00:12:08.995 4.815 - 4.846: 99.4312% ( 2) 00:12:08.995 4.876 - 4.907: 99.4372% ( 1) 00:12:08.995 4.907 - 4.937: 99.4432% ( 1) 00:12:08.995 
4.937 - 4.968: 99.4492% ( 1) 00:12:08.995 5.272 - 5.303: 99.4612% ( 2) 00:12:08.995 5.547 - 5.577: 99.4672% ( 1) 00:12:08.995 5.577 - 5.608: 99.4731% ( 1) 00:12:08.995 5.608 - 5.638: 99.4791% ( 1) 00:12:08.995 5.669 - 5.699: 99.4851% ( 1) 00:12:08.995 5.912 - 5.943: 99.4911% ( 1) 00:12:08.995 7.040 - 7.070: 99.4971% ( 1) 00:12:08.995 10.545 - 10.606: 99.5031% ( 1) 00:12:08.995 11.520 - 11.581: 99.5091% ( 1) 00:12:08.995 17.676 - 17.798: 99.5151% ( 1) 00:12:08.995 153.112 - 154.088: 99.5210% ( 1) 00:12:08.995 2044.099 - 2059.703: 99.5270% ( 1) 00:12:08.995 3994.575 - 4025.783: 99.9940% ( 78) 00:12:08.995 5960.655 - 5991.863: 100.0000% ( 1) 00:12:08.995 00:12:08.995 01:17:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:12:08.995 01:17:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:08.995 01:17:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:12:08.995 01:17:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:12:08.995 01:17:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:08.995 [ 00:12:08.995 { 00:12:08.995 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:08.995 "subtype": "Discovery", 00:12:08.995 "listen_addresses": [], 00:12:08.995 "allow_any_host": true, 00:12:08.995 "hosts": [] 00:12:08.995 }, 00:12:08.995 { 00:12:08.995 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:08.995 "subtype": "NVMe", 00:12:08.995 "listen_addresses": [ 00:12:08.995 { 00:12:08.995 "trtype": "VFIOUSER", 00:12:08.995 "adrfam": "IPv4", 00:12:08.995 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:08.995 "trsvcid": "0" 00:12:08.995 } 00:12:08.995 ], 00:12:08.995 "allow_any_host": true, 00:12:08.995 "hosts": [], 00:12:08.995 "serial_number": "SPDK1", 00:12:08.995 "model_number": "SPDK bdev Controller", 00:12:08.995 "max_namespaces": 32, 00:12:08.995 "min_cntlid": 1, 00:12:08.995 "max_cntlid": 65519, 00:12:08.995 "namespaces": [ 00:12:08.995 { 00:12:08.995 "nsid": 1, 00:12:08.995 "bdev_name": "Malloc1", 00:12:08.995 "name": "Malloc1", 00:12:08.995 "nguid": "E4C6C5F84D4C46AC8A2FBDE432FF3CDF", 00:12:08.995 "uuid": "e4c6c5f8-4d4c-46ac-8a2f-bde432ff3cdf" 00:12:08.995 }, 00:12:08.995 { 00:12:08.995 "nsid": 2, 00:12:08.995 "bdev_name": "Malloc3", 00:12:08.995 "name": "Malloc3", 00:12:08.995 "nguid": "FCCBF553DD8A4D3FAEFCCC8C6227A67E", 00:12:08.995 "uuid": "fccbf553-dd8a-4d3f-aefc-cc8c6227a67e" 00:12:08.995 } 00:12:08.995 ] 00:12:08.995 }, 00:12:08.995 { 00:12:08.995 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:08.995 "subtype": "NVMe", 00:12:08.995 "listen_addresses": [ 00:12:08.995 { 00:12:08.995 "trtype": "VFIOUSER", 00:12:08.995 "adrfam": "IPv4", 00:12:08.995 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:08.995 "trsvcid": "0" 00:12:08.995 } 00:12:08.995 ], 00:12:08.995 "allow_any_host": true, 00:12:08.995 "hosts": [], 00:12:08.995 "serial_number": "SPDK2", 00:12:08.995 "model_number": "SPDK bdev Controller", 00:12:08.995 "max_namespaces": 32, 00:12:08.995 "min_cntlid": 1, 00:12:08.995 "max_cntlid": 65519, 00:12:08.995 "namespaces": [ 00:12:08.995 { 00:12:08.995 "nsid": 1, 00:12:08.995 "bdev_name": "Malloc2", 00:12:08.995 "name": "Malloc2", 00:12:08.995 "nguid": "59CE489BD11E4A4CB58DC631170EC291", 00:12:08.995 "uuid": 
"59ce489b-d11e-4a4c-b58d-c631170ec291" 00:12:08.995 } 00:12:08.995 ] 00:12:08.995 } 00:12:08.995 ] 00:12:08.995 01:17:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:08.995 01:17:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3315534 00:12:08.995 01:17:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:08.995 01:17:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:12:08.995 01:17:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:12:08.995 01:17:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:08.995 01:17:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:08.995 01:17:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:12:08.995 01:17:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:08.995 01:17:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:12:08.995 EAL: No free 2048 kB hugepages reported on node 1 00:12:09.254 [2024-07-16 01:17:35.072760] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:09.254 Malloc4 00:12:09.254 01:17:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:12:09.512 [2024-07-16 01:17:35.268208] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:09.512 01:17:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:09.512 Asynchronous Event Request test 00:12:09.512 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:09.512 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:09.512 Registering asynchronous event callbacks... 00:12:09.512 Starting namespace attribute notice tests for all controllers... 00:12:09.512 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:09.512 aer_cb - Changed Namespace 00:12:09.512 Cleaning up... 
00:12:09.512 [ 00:12:09.512 { 00:12:09.512 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:09.512 "subtype": "Discovery", 00:12:09.512 "listen_addresses": [], 00:12:09.512 "allow_any_host": true, 00:12:09.512 "hosts": [] 00:12:09.512 }, 00:12:09.512 { 00:12:09.512 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:09.512 "subtype": "NVMe", 00:12:09.512 "listen_addresses": [ 00:12:09.512 { 00:12:09.512 "trtype": "VFIOUSER", 00:12:09.512 "adrfam": "IPv4", 00:12:09.512 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:09.512 "trsvcid": "0" 00:12:09.512 } 00:12:09.512 ], 00:12:09.512 "allow_any_host": true, 00:12:09.512 "hosts": [], 00:12:09.512 "serial_number": "SPDK1", 00:12:09.512 "model_number": "SPDK bdev Controller", 00:12:09.512 "max_namespaces": 32, 00:12:09.512 "min_cntlid": 1, 00:12:09.512 "max_cntlid": 65519, 00:12:09.512 "namespaces": [ 00:12:09.512 { 00:12:09.512 "nsid": 1, 00:12:09.512 "bdev_name": "Malloc1", 00:12:09.512 "name": "Malloc1", 00:12:09.512 "nguid": "E4C6C5F84D4C46AC8A2FBDE432FF3CDF", 00:12:09.512 "uuid": "e4c6c5f8-4d4c-46ac-8a2f-bde432ff3cdf" 00:12:09.512 }, 00:12:09.512 { 00:12:09.512 "nsid": 2, 00:12:09.512 "bdev_name": "Malloc3", 00:12:09.512 "name": "Malloc3", 00:12:09.512 "nguid": "FCCBF553DD8A4D3FAEFCCC8C6227A67E", 00:12:09.512 "uuid": "fccbf553-dd8a-4d3f-aefc-cc8c6227a67e" 00:12:09.512 } 00:12:09.512 ] 00:12:09.512 }, 00:12:09.512 { 00:12:09.512 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:09.512 "subtype": "NVMe", 00:12:09.512 "listen_addresses": [ 00:12:09.512 { 00:12:09.512 "trtype": "VFIOUSER", 00:12:09.512 "adrfam": "IPv4", 00:12:09.512 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:09.512 "trsvcid": "0" 00:12:09.512 } 00:12:09.512 ], 00:12:09.512 "allow_any_host": true, 00:12:09.512 "hosts": [], 00:12:09.512 "serial_number": "SPDK2", 00:12:09.512 "model_number": "SPDK bdev Controller", 00:12:09.512 "max_namespaces": 32, 00:12:09.512 "min_cntlid": 1, 00:12:09.512 "max_cntlid": 65519, 00:12:09.512 "namespaces": [ 00:12:09.512 { 00:12:09.512 "nsid": 1, 00:12:09.512 "bdev_name": "Malloc2", 00:12:09.512 "name": "Malloc2", 00:12:09.512 "nguid": "59CE489BD11E4A4CB58DC631170EC291", 00:12:09.512 "uuid": "59ce489b-d11e-4a4c-b58d-c631170ec291" 00:12:09.512 }, 00:12:09.512 { 00:12:09.512 "nsid": 2, 00:12:09.512 "bdev_name": "Malloc4", 00:12:09.512 "name": "Malloc4", 00:12:09.512 "nguid": "5F52E1F28E674F0F8200DF4811BE4AAA", 00:12:09.512 "uuid": "5f52e1f2-8e67-4f0f-8200-df4811be4aaa" 00:12:09.512 } 00:12:09.512 ] 00:12:09.512 } 00:12:09.512 ] 00:12:09.512 01:17:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3315534 00:12:09.512 01:17:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:12:09.512 01:17:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3307698 00:12:09.512 01:17:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 3307698 ']' 00:12:09.512 01:17:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 3307698 00:12:09.512 01:17:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:12:09.512 01:17:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:09.512 01:17:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3307698 00:12:09.770 01:17:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:09.770 01:17:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo 
']' 00:12:09.770 01:17:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3307698' 00:12:09.770 killing process with pid 3307698 00:12:09.770 01:17:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 3307698 00:12:09.770 01:17:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 3307698 00:12:10.029 01:17:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:10.029 01:17:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:10.029 01:17:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:12:10.029 01:17:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:12:10.029 01:17:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:12:10.029 01:17:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:12:10.029 01:17:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3315564 00:12:10.029 01:17:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3315564' 00:12:10.029 Process pid: 3315564 00:12:10.029 01:17:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:10.029 01:17:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3315564 00:12:10.029 01:17:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 3315564 ']' 00:12:10.029 01:17:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:10.029 01:17:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:10.029 01:17:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:10.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:10.029 01:17:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:10.029 01:17:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:10.029 [2024-07-16 01:17:35.801804] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:12:10.029 [2024-07-16 01:17:35.802674] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:12:10.029 [2024-07-16 01:17:35.802712] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:10.029 EAL: No free 2048 kB hugepages reported on node 1 00:12:10.029 [2024-07-16 01:17:35.859774] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:10.029 [2024-07-16 01:17:35.931270] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:10.029 [2024-07-16 01:17:35.931315] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:10.029 [2024-07-16 01:17:35.931321] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:10.029 [2024-07-16 01:17:35.931327] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:10.030 [2024-07-16 01:17:35.931331] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:10.030 [2024-07-16 01:17:35.931402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.030 [2024-07-16 01:17:35.931497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:10.030 [2024-07-16 01:17:35.931588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:10.030 [2024-07-16 01:17:35.931589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.030 [2024-07-16 01:17:36.009411] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:12:10.030 [2024-07-16 01:17:36.009530] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:12:10.030 [2024-07-16 01:17:36.009721] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:12:10.030 [2024-07-16 01:17:36.010031] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:12:10.030 [2024-07-16 01:17:36.010252] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:12:10.288 01:17:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:10.288 01:17:36 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:12:10.288 01:17:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:11.225 01:17:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:12:11.484 01:17:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:11.484 01:17:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:11.484 01:17:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:11.484 01:17:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:11.484 01:17:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:11.484 Malloc1 00:12:11.484 01:17:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:11.743 01:17:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:12.001 01:17:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:12.001 01:17:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:12:12.001 01:17:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:12.001 01:17:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:12.259 Malloc2 00:12:12.259 01:17:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:12.518 01:17:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:12.518 01:17:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:12.776 01:17:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:12:12.776 01:17:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3315564 00:12:12.776 01:17:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 3315564 ']' 00:12:12.776 01:17:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 3315564 00:12:12.776 01:17:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:12:12.776 01:17:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:12.776 01:17:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3315564 00:12:12.776 01:17:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:12.776 01:17:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:12.776 01:17:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3315564' 00:12:12.776 killing process with pid 3315564 00:12:12.776 01:17:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 3315564 00:12:12.776 01:17:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 3315564 00:12:13.034 01:17:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:13.034 01:17:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:13.034 00:12:13.034 real 0m50.545s 00:12:13.034 user 3m20.238s 00:12:13.034 sys 0m3.307s 00:12:13.034 01:17:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:13.034 01:17:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:13.034 ************************************ 00:12:13.034 END TEST nvmf_vfio_user 00:12:13.034 ************************************ 00:12:13.034 01:17:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:13.034 01:17:38 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:13.034 01:17:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:13.034 01:17:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:13.034 01:17:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:13.034 ************************************ 00:12:13.034 START 
TEST nvmf_vfio_user_nvme_compliance 00:12:13.034 ************************************ 00:12:13.034 01:17:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:13.293 * Looking for test storage... 00:12:13.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3316306 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3316306' 00:12:13.293 Process pid: 3316306 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3316306 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 3316306 ']' 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:13.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:13.293 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:13.293 [2024-07-16 01:17:39.119909] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:12:13.293 [2024-07-16 01:17:39.119950] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:13.293 EAL: No free 2048 kB hugepages reported on node 1 00:12:13.293 [2024-07-16 01:17:39.176425] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:13.293 [2024-07-16 01:17:39.255680] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:13.293 [2024-07-16 01:17:39.255715] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:13.293 [2024-07-16 01:17:39.255722] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:13.293 [2024-07-16 01:17:39.255727] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:13.293 [2024-07-16 01:17:39.255733] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
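The target for this suite was started with every tracepoint group enabled (-e 0xFFFF) and shm id 0, which is what the notices above refer to: the trace ring can be snapshotted live with spdk_trace or pulled from /dev/shm/nvmf_trace.0 afterwards. A minimal sketch of that launch-and-capture flow, assuming an SPDK tree at $SPDK_DIR and that spdk_trace sits in build/bin (both paths illustrative):

    # Start the target with all tracepoint groups on, three reactors, shm id 0
    $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
    tgt_pid=$!
    # Wait for the RPC socket instead of sleeping a fixed interval
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
    # Snapshot the live trace ring for app "nvmf", instance 0
    $SPDK_DIR/build/bin/spdk_trace -s nvmf -i 0 > trace_snapshot.txt
    kill "$tgt_pid"
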
00:12:13.293 [2024-07-16 01:17:39.255771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:13.293 [2024-07-16 01:17:39.255788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:13.293 [2024-07-16 01:17:39.255789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.227 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:14.227 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:12:14.227 01:17:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:12:15.190 01:17:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:15.190 01:17:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:12:15.190 01:17:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:15.190 01:17:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.190 01:17:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:15.190 01:17:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.190 01:17:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:12:15.190 01:17:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:15.190 01:17:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.190 01:17:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:15.190 malloc0 00:12:15.190 01:17:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.190 01:17:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:12:15.190 01:17:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.190 01:17:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:15.190 01:17:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.190 01:17:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:15.190 01:17:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.190 01:17:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:15.190 01:17:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.190 01:17:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:15.190 01:17:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.190 01:17:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:15.190 01:17:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.190 
01:17:40 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:12:15.190 EAL: No free 2048 kB hugepages reported on node 1 00:12:15.190 00:12:15.190 00:12:15.190 CUnit - A unit testing framework for C - Version 2.1-3 00:12:15.190 http://cunit.sourceforge.net/ 00:12:15.190 00:12:15.190 00:12:15.190 Suite: nvme_compliance 00:12:15.190 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-16 01:17:41.142334] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:15.190 [2024-07-16 01:17:41.143674] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:12:15.190 [2024-07-16 01:17:41.143688] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:12:15.190 [2024-07-16 01:17:41.143694] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:12:15.190 [2024-07-16 01:17:41.148370] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:15.190 passed 00:12:15.449 Test: admin_identify_ctrlr_verify_fused ...[2024-07-16 01:17:41.225928] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:15.449 [2024-07-16 01:17:41.228946] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:15.449 passed 00:12:15.449 Test: admin_identify_ns ...[2024-07-16 01:17:41.309393] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:15.449 [2024-07-16 01:17:41.370350] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:12:15.449 [2024-07-16 01:17:41.378353] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:12:15.449 [2024-07-16 01:17:41.399435] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:15.449 passed 00:12:15.707 Test: admin_get_features_mandatory_features ...[2024-07-16 01:17:41.480378] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:15.707 [2024-07-16 01:17:41.483393] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:15.707 passed 00:12:15.707 Test: admin_get_features_optional_features ...[2024-07-16 01:17:41.562909] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:15.707 [2024-07-16 01:17:41.565927] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:15.707 passed 00:12:15.707 Test: admin_set_features_number_of_queues ...[2024-07-16 01:17:41.643420] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:15.966 [2024-07-16 01:17:41.749422] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:15.966 passed 00:12:15.966 Test: admin_get_log_page_mandatory_logs ...[2024-07-16 01:17:41.827101] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:15.966 [2024-07-16 01:17:41.830126] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:15.966 passed 00:12:15.966 Test: admin_get_log_page_with_lpo ...[2024-07-16 01:17:41.908638] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:16.225 [2024-07-16 01:17:41.976358] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:12:16.225 [2024-07-16 01:17:41.989412] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:16.225 passed 00:12:16.225 Test: fabric_property_get ...[2024-07-16 01:17:42.066370] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:16.225 [2024-07-16 01:17:42.067603] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:12:16.225 [2024-07-16 01:17:42.069384] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:16.225 passed 00:12:16.225 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-16 01:17:42.149873] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:16.225 [2024-07-16 01:17:42.151096] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:12:16.225 [2024-07-16 01:17:42.152895] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:16.225 passed 00:12:16.483 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-16 01:17:42.231636] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:16.483 [2024-07-16 01:17:42.315343] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:16.483 [2024-07-16 01:17:42.331353] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:16.483 [2024-07-16 01:17:42.336419] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:16.483 passed 00:12:16.483 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-16 01:17:42.414149] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:16.483 [2024-07-16 01:17:42.415387] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:12:16.483 [2024-07-16 01:17:42.417174] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:16.483 passed 00:12:16.741 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-16 01:17:42.495832] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:16.741 [2024-07-16 01:17:42.572348] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:16.741 [2024-07-16 01:17:42.596353] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:16.741 [2024-07-16 01:17:42.601430] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:16.741 passed 00:12:16.741 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-16 01:17:42.678564] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:16.741 [2024-07-16 01:17:42.679800] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:12:16.741 [2024-07-16 01:17:42.679823] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:12:16.741 [2024-07-16 01:17:42.681588] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:16.741 passed 00:12:17.000 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-16 01:17:42.760185] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:17.000 [2024-07-16 01:17:42.852353] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:12:17.000 [2024-07-16 01:17:42.860345] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:12:17.000 [2024-07-16 01:17:42.867348] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:12:17.000 [2024-07-16 01:17:42.876352] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:12:17.000 [2024-07-16 01:17:42.905430] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:17.000 passed 00:12:17.000 Test: admin_create_io_sq_verify_pc ...[2024-07-16 01:17:42.982410] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:17.258 [2024-07-16 01:17:43.000352] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:12:17.258 [2024-07-16 01:17:43.017596] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:17.258 passed 00:12:17.258 Test: admin_create_io_qp_max_qps ...[2024-07-16 01:17:43.094074] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:18.633 [2024-07-16 01:17:44.182346] nvme_ctrlr.c:5475:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:12:18.633 [2024-07-16 01:17:44.562557] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:18.633 passed 00:12:18.891 Test: admin_create_io_sq_shared_cq ...[2024-07-16 01:17:44.640444] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:18.891 [2024-07-16 01:17:44.773348] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:18.891 [2024-07-16 01:17:44.810407] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:18.891 passed 00:12:18.891 00:12:18.891 Run Summary: Type Total Ran Passed Failed Inactive 00:12:18.891 suites 1 1 n/a 0 0 00:12:18.891 tests 18 18 18 0 0 00:12:18.891 asserts 360 360 360 0 n/a 00:12:18.891 00:12:18.891 Elapsed time = 1.512 seconds 00:12:18.891 01:17:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3316306 00:12:18.891 01:17:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 3316306 ']' 00:12:18.891 01:17:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 3316306 00:12:18.891 01:17:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:12:18.891 01:17:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:18.891 01:17:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3316306 00:12:19.150 01:17:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:19.150 01:17:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:19.150 01:17:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3316306' 00:12:19.150 killing process with pid 3316306 00:12:19.150 01:17:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 3316306 00:12:19.150 01:17:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 3316306 00:12:19.150 01:17:45 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:12:19.150 01:17:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:12:19.150 00:12:19.150 real 0m6.131s 00:12:19.150 user 0m17.595s 00:12:19.150 sys 0m0.447s 00:12:19.150 01:17:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:19.150 01:17:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:19.150 ************************************ 00:12:19.150 END TEST nvmf_vfio_user_nvme_compliance 00:12:19.150 ************************************ 00:12:19.150 01:17:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:19.150 01:17:45 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:19.150 01:17:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:19.150 01:17:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:19.150 01:17:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:19.409 ************************************ 00:12:19.409 START TEST nvmf_vfio_user_fuzz 00:12:19.409 ************************************ 00:12:19.409 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:19.409 * Looking for test storage... 00:12:19.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:19.409 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:19.409 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:12:19.409 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:19.409 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:19.409 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:19.409 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:19.409 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:19.409 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:19.409 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:19.409 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:19.409 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:19.409 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:19.409 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:19.409 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:19.409 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:19.409 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:19.409 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:12:19.409 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:19.409 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:19.409 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.409 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.409 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.409 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.409 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.409 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.409 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:12:19.409 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.409 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:12:19.409 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:19.409 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:19.409 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:19.409 01:17:45 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:19.409 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:19.409 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:19.409 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:19.410 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:19.410 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:19.410 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:19.410 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:19.410 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:12:19.410 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:19.410 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:19.410 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:12:19.410 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3317297 00:12:19.410 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3317297' 00:12:19.410 Process pid: 3317297 00:12:19.410 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:19.410 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:19.410 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3317297 00:12:19.410 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 3317297 ']' 00:12:19.410 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.410 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:19.410 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
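Once the socket appears, the rpc_cmd calls below build the same vfio-user target shape used throughout this job: a VFIOUSER transport, one 64 MB malloc bdev, and one subsystem listening under /var/run/vfio-user. A condensed sketch of that bring-up using rpc.py directly, with commands and arguments taken from this run (the polling loop is an illustrative stand-in for waitforlisten):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # stand-in for waitforlisten
    $rpc nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    $rpc bdev_malloc_create 64 512 -b malloc0               # 64 MB bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    $rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    $rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
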
00:12:19.410 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:19.410 01:17:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:20.346 01:17:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:20.346 01:17:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:12:20.346 01:17:46 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:12:21.284 01:17:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:21.284 01:17:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.284 01:17:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:21.284 01:17:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.284 01:17:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:12:21.284 01:17:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:21.284 01:17:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.284 01:17:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:21.284 malloc0 00:12:21.284 01:17:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.284 01:17:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:12:21.284 01:17:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.284 01:17:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:21.284 01:17:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.284 01:17:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:21.284 01:17:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.284 01:17:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:21.284 01:17:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.284 01:17:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:21.284 01:17:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.284 01:17:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:21.284 01:17:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.284 01:17:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:12:21.284 01:17:47 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:12:53.367 Fuzzing completed. 
Shutting down the fuzz application 00:12:53.367 00:12:53.367 Dumping successful admin opcodes: 00:12:53.367 8, 9, 10, 24, 00:12:53.367 Dumping successful io opcodes: 00:12:53.367 0, 00:12:53.367 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1022470, total successful commands: 4021, random_seed: 1692649408 00:12:53.367 NS: 0x200003a1ef00 admin qp, Total commands completed: 251586, total successful commands: 2033, random_seed: 1181732608 00:12:53.367 01:18:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:12:53.367 01:18:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.367 01:18:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:53.367 01:18:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.367 01:18:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3317297 00:12:53.367 01:18:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 3317297 ']' 00:12:53.367 01:18:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 3317297 00:12:53.367 01:18:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:12:53.367 01:18:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:53.367 01:18:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3317297 00:12:53.367 01:18:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:53.367 01:18:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:53.367 01:18:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3317297' 00:12:53.367 killing process with pid 3317297 00:12:53.367 01:18:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 3317297 00:12:53.367 01:18:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 3317297 00:12:53.367 01:18:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:12:53.367 01:18:17 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:12:53.367 00:12:53.367 real 0m32.806s 00:12:53.367 user 0m31.138s 00:12:53.367 sys 0m30.494s 00:12:53.367 01:18:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:53.367 01:18:17 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:53.367 ************************************ 00:12:53.367 END TEST nvmf_vfio_user_fuzz 00:12:53.367 ************************************ 00:12:53.367 01:18:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:53.367 01:18:18 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:53.367 01:18:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:53.367 01:18:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:53.367 01:18:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:53.367 ************************************ 
00:12:53.367 START TEST nvmf_host_management 00:12:53.367 ************************************ 00:12:53.367 01:18:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:53.367 * Looking for test storage... 00:12:53.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:53.367 01:18:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:53.367 01:18:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:12:53.367 01:18:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:53.367 01:18:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:53.367 01:18:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:53.367 01:18:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:53.367 01:18:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:53.367 01:18:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:53.367 01:18:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:53.367 01:18:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:53.367 01:18:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:53.367 01:18:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:53.367 01:18:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:53.367 01:18:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:53.368 01:18:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:53.368 01:18:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:53.368 01:18:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:53.368 01:18:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:53.368 01:18:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:53.368 01:18:18 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:53.368 01:18:18 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:53.368 01:18:18 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:53.368 01:18:18 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.368 
01:18:18 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.368 01:18:18 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.368 01:18:18 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:12:53.368 01:18:18 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.368 01:18:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:12:53.368 01:18:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:53.368 01:18:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:53.368 01:18:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:53.368 01:18:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:53.368 01:18:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:53.368 01:18:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:53.368 01:18:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:53.368 01:18:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:53.368 01:18:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:53.368 01:18:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:53.368 01:18:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:12:53.368 01:18:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:53.368 01:18:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:53.368 01:18:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:53.368 01:18:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:53.368 01:18:18 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:53.368 01:18:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.368 01:18:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:53.368 01:18:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:53.368 01:18:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:53.368 01:18:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:53.368 01:18:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:12:53.368 01:18:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:57.570 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:57.570 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:57.570 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:57.571 Found net devices under 0000:86:00.0: cvl_0_0 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:57.571 Found net devices under 0000:86:00.1: cvl_0_1 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:57.571 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:57.571 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:12:57.571 00:12:57.571 --- 10.0.0.2 ping statistics --- 00:12:57.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.571 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:57.571 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:57.571 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:12:57.571 00:12:57.571 --- 10.0.0.1 ping statistics --- 00:12:57.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.571 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=3326323 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 3326323 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 3326323 ']' 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:12:57.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:57.571 01:18:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:57.571 [2024-07-16 01:18:23.427484] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:12:57.571 [2024-07-16 01:18:23.427525] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:57.571 EAL: No free 2048 kB hugepages reported on node 1 00:12:57.571 [2024-07-16 01:18:23.485102] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:57.830 [2024-07-16 01:18:23.564246] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:57.830 [2024-07-16 01:18:23.564280] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:57.830 [2024-07-16 01:18:23.564287] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:57.830 [2024-07-16 01:18:23.564293] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:57.830 [2024-07-16 01:18:23.564298] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:57.830 [2024-07-16 01:18:23.564403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:57.830 [2024-07-16 01:18:23.564487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:57.830 [2024-07-16 01:18:23.564596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:57.830 [2024-07-16 01:18:23.564596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:58.395 01:18:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:58.395 01:18:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:12:58.395 01:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:58.395 01:18:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:58.395 01:18:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:58.395 01:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:58.395 01:18:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:58.395 01:18:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.395 01:18:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:58.395 [2024-07-16 01:18:24.277345] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:58.395 01:18:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.395 01:18:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:12:58.395 01:18:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:58.395 01:18:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:58.395 01:18:24 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:12:58.395 01:18:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:12:58.395 01:18:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:12:58.395 01:18:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.395 01:18:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:58.395 Malloc0 00:12:58.395 [2024-07-16 01:18:24.336887] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.395 01:18:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.395 01:18:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:12:58.395 01:18:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:58.395 01:18:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:58.653 01:18:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3326580 00:12:58.653 01:18:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3326580 /var/tmp/bdevperf.sock 00:12:58.653 01:18:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 3326580 ']' 00:12:58.653 01:18:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:12:58.653 01:18:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:12:58.653 01:18:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:58.653 01:18:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:58.653 01:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:12:58.653 01:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:12:58.653 01:18:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:58.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
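For orientation, the bdevperf launch traced here reduces to the following shape. This is an illustrative sketch, not part of the captured run: the readiness loop is a simplified stand-in for the harness's waitforlisten helper, and the /dev/fd number seen in the trace is whatever the shell assigns to the process substitution.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Start bdevperf on its own RPC socket, feeding the generated target JSON
  # over an anonymous pipe (appears as --json /dev/fd/NN in the trace).
  "$SPDK/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
      --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 10 &
  perfpid=$!
  # Simplified stand-in for waitforlisten: block until the RPC socket exists.
  until [ -S /var/tmp/bdevperf.sock ]; do sleep 0.1; done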
00:12:58.653 01:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:58.653 01:18:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:58.653 01:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:58.653 { 00:12:58.653 "params": { 00:12:58.653 "name": "Nvme$subsystem", 00:12:58.653 "trtype": "$TEST_TRANSPORT", 00:12:58.653 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:58.653 "adrfam": "ipv4", 00:12:58.653 "trsvcid": "$NVMF_PORT", 00:12:58.653 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:58.653 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:58.653 "hdgst": ${hdgst:-false}, 00:12:58.653 "ddgst": ${ddgst:-false} 00:12:58.653 }, 00:12:58.653 "method": "bdev_nvme_attach_controller" 00:12:58.653 } 00:12:58.653 EOF 00:12:58.653 )") 00:12:58.653 01:18:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:58.653 01:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:12:58.653 01:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:12:58.653 01:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:12:58.653 01:18:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:58.653 "params": { 00:12:58.653 "name": "Nvme0", 00:12:58.653 "trtype": "tcp", 00:12:58.653 "traddr": "10.0.0.2", 00:12:58.653 "adrfam": "ipv4", 00:12:58.653 "trsvcid": "4420", 00:12:58.653 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:58.653 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:58.653 "hdgst": false, 00:12:58.653 "ddgst": false 00:12:58.653 }, 00:12:58.653 "method": "bdev_nvme_attach_controller" 00:12:58.653 }' 00:12:58.653 [2024-07-16 01:18:24.427803] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:12:58.653 [2024-07-16 01:18:24.427850] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3326580 ] 00:12:58.653 EAL: No free 2048 kB hugepages reported on node 1 00:12:58.653 [2024-07-16 01:18:24.483836] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.653 [2024-07-16 01:18:24.555946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.911 Running I/O for 10 seconds... 
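The poll that follows ("waitforio" in host_management.sh) declares the job live once the bdev has completed a minimum number of reads. A condensed sketch of that loop, mirroring the RPC and jq filter in the trace; the retry pacing is an assumption, not taken from the log:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for ((i = 10; i != 0; i--)); do
      # Query bdevperf's own RPC server (not the target's) for I/O counters.
      read_io_count=$("$rpc" -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
          jq -r '.bdevs[0].num_read_ops')
      [ "$read_io_count" -ge 100 ] && break   # e.g. 1091 reads observed below
      sleep 0.25                              # pacing assumed for the sketch
  done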
00:12:59.478 01:18:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:59.478 01:18:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:12:59.478 01:18:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:12:59.478 01:18:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.478 01:18:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:59.478 01:18:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.478 01:18:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:59.478 01:18:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:12:59.478 01:18:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:12:59.478 01:18:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:12:59.478 01:18:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:12:59.478 01:18:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:12:59.478 01:18:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:12:59.478 01:18:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:59.478 01:18:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:59.478 01:18:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:59.478 01:18:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.478 01:18:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:59.478 01:18:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.478 01:18:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1091 00:12:59.478 01:18:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1091 -ge 100 ']' 00:12:59.478 01:18:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:12:59.478 01:18:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:12:59.478 01:18:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:12:59.478 01:18:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:59.478 01:18:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.478 01:18:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:59.478 [2024-07-16 01:18:25.300179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.478 [2024-07-16 01:18:25.300220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.478 [2024-07-16 01:18:25.300241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 
lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.478 [2024-07-16 01:18:25.300248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.478 [2024-07-16 01:18:25.300258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.478 [2024-07-16 01:18:25.300265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.478 [2024-07-16 01:18:25.300274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.478 [2024-07-16 01:18:25.300280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.478 [2024-07-16 01:18:25.300288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.478 [2024-07-16 01:18:25.300295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.478 [2024-07-16 01:18:25.300303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.479 [2024-07-16 01:18:25.300310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.479 [2024-07-16 01:18:25.300318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.479 [2024-07-16 01:18:25.300324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.479 [2024-07-16 01:18:25.300333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.479 [2024-07-16 01:18:25.300346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.479 [2024-07-16 01:18:25.300354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.479 [2024-07-16 01:18:25.300360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.479 [2024-07-16 01:18:25.300368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.479 [2024-07-16 01:18:25.300375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.479 [2024-07-16 01:18:25.300384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.479 [2024-07-16 01:18:25.300391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.479 [2024-07-16 01:18:25.300399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.479 [2024-07-16 01:18:25.300406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.479 [2024-07-16 01:18:25.300415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.479 [2024-07-16 01:18:25.300427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.479 [2024-07-16 01:18:25.300437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.479 [2024-07-16 01:18:25.300444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.479 [2024-07-16 01:18:25.300453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.479 [2024-07-16 01:18:25.300460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.479 [2024-07-16 01:18:25.300470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.479 [2024-07-16 01:18:25.300478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.479 [2024-07-16 01:18:25.300486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.479 [2024-07-16 01:18:25.300495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.479 [2024-07-16 01:18:25.300503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.479 [2024-07-16 01:18:25.300511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.479 [2024-07-16 01:18:25.300520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.479 [2024-07-16 01:18:25.300527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.479 [2024-07-16 01:18:25.300537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.479 [2024-07-16 01:18:25.300545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.479 [2024-07-16 01:18:25.300554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.479 [2024-07-16 01:18:25.300562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.479 [2024-07-16 01:18:25.300571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:12:59.479 [2024-07-16 01:18:25.300578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.479 [2024-07-16 01:18:25.300587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.479 [2024-07-16 01:18:25.300595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.479 [2024-07-16 01:18:25.300604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.479 [2024-07-16 01:18:25.300612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.479 [2024-07-16 01:18:25.300621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.479 [2024-07-16 01:18:25.300630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.479 [2024-07-16 01:18:25.300641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.479 [2024-07-16 01:18:25.300649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.479 [2024-07-16 01:18:25.300659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.479 [2024-07-16 01:18:25.300667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.479 [2024-07-16 01:18:25.300676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.479 [2024-07-16 01:18:25.300684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.479 [2024-07-16 01:18:25.300693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.479 [2024-07-16 01:18:25.300701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.479 [2024-07-16 01:18:25.300711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.479 [2024-07-16 01:18:25.300719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.479 [2024-07-16 01:18:25.300728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.479 [2024-07-16 01:18:25.300736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.479 [2024-07-16 01:18:25.300745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:12:59.479 [2024-07-16 01:18:25.300753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.479 [2024-07-16 01:18:25.300764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.479 [2024-07-16 01:18:25.300771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.479 [2024-07-16 01:18:25.300781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.479 [2024-07-16 01:18:25.300789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.479 [2024-07-16 01:18:25.300798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.479 [2024-07-16 01:18:25.300806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.479 [2024-07-16 01:18:25.300815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.479 [2024-07-16 01:18:25.300823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.479 [2024-07-16 01:18:25.300832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.479 [2024-07-16 01:18:25.300840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.479 [2024-07-16 01:18:25.300850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.479 [2024-07-16 01:18:25.300858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.479 [2024-07-16 01:18:25.300869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.479 [2024-07-16 01:18:25.300877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.479 [2024-07-16 01:18:25.300887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.479 [2024-07-16 01:18:25.300895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.479 [2024-07-16 01:18:25.300905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.479 [2024-07-16 01:18:25.300914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.479 [2024-07-16 01:18:25.300923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.479 [2024-07-16 
01:18:25.300931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.479 [2024-07-16 01:18:25.300940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.479 [2024-07-16 01:18:25.300948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.479 [2024-07-16 01:18:25.300956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.479 [2024-07-16 01:18:25.300964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.479 [2024-07-16 01:18:25.300973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.479 [2024-07-16 01:18:25.300981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.479 [2024-07-16 01:18:25.300990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.479 [2024-07-16 01:18:25.300998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.479 [2024-07-16 01:18:25.301006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.479 [2024-07-16 01:18:25.301013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.479 [2024-07-16 01:18:25.301022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.479 [2024-07-16 01:18:25.301030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.480 [2024-07-16 01:18:25.301038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.480 [2024-07-16 01:18:25.301045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.480 [2024-07-16 01:18:25.301054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.480 [2024-07-16 01:18:25.301061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.480 [2024-07-16 01:18:25.301070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.480 [2024-07-16 01:18:25.301078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.480 [2024-07-16 01:18:25.301087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.480 [2024-07-16 01:18:25.301094] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.480 [2024-07-16 01:18:25.301103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.480 [2024-07-16 01:18:25.301111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.480 [2024-07-16 01:18:25.301120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.480 [2024-07-16 01:18:25.301127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.480 [2024-07-16 01:18:25.301135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.480 [2024-07-16 01:18:25.301142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.480 [2024-07-16 01:18:25.301150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.480 [2024-07-16 01:18:25.301158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.480 [2024-07-16 01:18:25.301166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.480 [2024-07-16 01:18:25.301173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.480 [2024-07-16 01:18:25.301182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.480 [2024-07-16 01:18:25.301189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.480 [2024-07-16 01:18:25.301198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.480 [2024-07-16 01:18:25.301205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.480 [2024-07-16 01:18:25.301213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.480 [2024-07-16 01:18:25.301221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.480 [2024-07-16 01:18:25.301230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.480 [2024-07-16 01:18:25.301238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.480 [2024-07-16 01:18:25.301246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:59.480 [2024-07-16 01:18:25.301253] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:59.480 [2024-07-16 01:18:25.301262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:12:59.480 [2024-07-16 01:18:25.301269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:59.480 [2024-07-16 01:18:25.301279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:12:59.480 [2024-07-16 01:18:25.301287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:59.480 [2024-07-16 01:18:25.301294] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a40cd0 is same with the state(5) to be set
00:12:59.480 [2024-07-16 01:18:25.301350] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a40cd0 was disconnected and freed. reset controller.
00:12:59.480 [2024-07-16 01:18:25.302260] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:12:59.480 task offset: 24064 on job bdev=Nvme0n1 fails
00:12:59.480
00:12:59.480                                                   Latency(us)
00:12:59.480 Device Information            : runtime(s)    IOPS   MiB/s  Fail/s  TO/s   Average      min      max
00:12:59.480 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:12:59.480 Job: Nvme0n1 ended in about 0.59 seconds with error
00:12:59.480 Verification LBA range: start 0x0 length 0x400
00:12:59.480 Nvme0n1                       :       0.59 1948.86  121.80  108.27  0.00  30472.43  1607.19 26588.89
00:12:59.480 ===================================================================================================================
00:12:59.480 Total                         :            1948.86  121.80  108.27  0.00  30472.43  1607.19 26588.89
00:12:59.480 [2024-07-16 01:18:25.303819] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:12:59.480 [2024-07-16 01:18:25.303835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x162fad0 (9): Bad file descriptor
00:12:59.480 01:18:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:59.480 01:18:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:12:59.480 01:18:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:59.480 01:18:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:12:59.480 [2024-07-16 01:18:25.311452] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
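The burst of ABORTED - SQ DELETION completions and the controller reset above are the point of this test step: host access to the subsystem is revoked while the verify job is mid-flight, queued I/O is failed back to bdevperf, and access is then restored so the reset can complete. Stated as the two bare RPCs (a sketch against the target's default /var/tmp/spdk.sock, matching the rpc_cmd calls in the trace):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Revoke the initiator's access: its qpairs are torn down and in-flight
  # commands complete with ABORTED - SQ DELETION, as logged above.
  "$rpc" nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  # Restore access: the initiator's pending controller reset can then succeed.
  "$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0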
00:12:59.480 01:18:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.480 01:18:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:13:00.411 01:18:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3326580 00:13:00.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3326580) - No such process 00:13:00.411 01:18:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:13:00.411 01:18:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:00.411 01:18:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:00.411 01:18:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:00.411 01:18:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:00.411 01:18:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:00.411 01:18:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:00.411 01:18:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:00.411 { 00:13:00.411 "params": { 00:13:00.411 "name": "Nvme$subsystem", 00:13:00.411 "trtype": "$TEST_TRANSPORT", 00:13:00.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:00.411 "adrfam": "ipv4", 00:13:00.411 "trsvcid": "$NVMF_PORT", 00:13:00.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:00.411 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:00.411 "hdgst": ${hdgst:-false}, 00:13:00.411 "ddgst": ${ddgst:-false} 00:13:00.411 }, 00:13:00.411 "method": "bdev_nvme_attach_controller" 00:13:00.411 } 00:13:00.411 EOF 00:13:00.411 )") 00:13:00.411 01:18:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:00.411 01:18:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:13:00.411 01:18:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:00.411 01:18:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:00.411 "params": { 00:13:00.411 "name": "Nvme0", 00:13:00.411 "trtype": "tcp", 00:13:00.411 "traddr": "10.0.0.2", 00:13:00.411 "adrfam": "ipv4", 00:13:00.411 "trsvcid": "4420", 00:13:00.411 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:00.411 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:00.411 "hdgst": false, 00:13:00.411 "ddgst": false 00:13:00.411 }, 00:13:00.411 "method": "bdev_nvme_attach_controller" 00:13:00.411 }' 00:13:00.411 [2024-07-16 01:18:26.366771] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:13:00.411 [2024-07-16 01:18:26.366817] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3326840 ] 00:13:00.411 EAL: No free 2048 kB hugepages reported on node 1 00:13:00.669 [2024-07-16 01:18:26.423781] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.669 [2024-07-16 01:18:26.492797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.669 Running I/O for 1 seconds... 
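This second, clean verification pass reuses the same JSON template; --json /dev/fd/62 in the trace is simply the file descriptor bash picked for the process substitution. Reproduced by hand it would look like the following (a sketch, parameters as traced):

  "$SPDK/build/examples/bdevperf" --json <(gen_nvmf_target_json 0) \
      -q 64 -o 65536 -w verify -t 1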
00:13:02.044
00:13:02.044                                                   Latency(us)
00:13:02.044 Device Information            : runtime(s)    IOPS   MiB/s  Fail/s  TO/s   Average      min      max
00:13:02.044 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:13:02.044 Verification LBA range: start 0x0 length 0x400
00:13:02.044 Nvme0n1                       :       1.01 2028.12  126.76    0.00  0.00  31067.13  5804.62 26588.89
00:13:02.044 ===================================================================================================================
00:13:02.044 Total                         :            2028.12  126.76    0.00  0.00  31067.13  5804.62 26588.89
00:13:02.044 01:18:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:13:02.044 01:18:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:13:02.044 01:18:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:13:02.044 01:18:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:13:02.044 01:18:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:13:02.044 01:18:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup
00:13:02.044 01:18:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync
00:13:02.044 01:18:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:13:02.044 01:18:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e
00:13:02.044 01:18:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20}
00:13:02.044 01:18:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:13:02.044 rmmod nvme_tcp
00:13:02.044 rmmod nvme_fabrics
00:13:02.044 rmmod nvme_keyring
00:13:02.044 01:18:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:13:02.044 01:18:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e
00:13:02.044 01:18:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0
00:13:02.044 01:18:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 3326323 ']'
00:13:02.044 01:18:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 3326323
00:13:02.044 01:18:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 3326323 ']'
00:13:02.044 01:18:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 3326323
00:13:02.044 01:18:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname
00:13:02.044 01:18:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:13:02.044 01:18:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3326323
00:13:02.044 01:18:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:13:02.044 01:18:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:13:02.044 01:18:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3326323'
killing process with pid 3326323
00:13:02.044 01:18:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 3326323
00:13:02.044 01:18:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 3326323
00:13:02.302 [2024-07-16 01:18:28.126079]
app.c: 716:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:02.302 01:18:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:02.302 01:18:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:02.302 01:18:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:02.303 01:18:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:02.303 01:18:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:02.303 01:18:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:02.303 01:18:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:02.303 01:18:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.834 01:18:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:04.834 01:18:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:13:04.834 00:13:04.834 real 0m12.178s 00:13:04.834 user 0m22.101s 00:13:04.834 sys 0m5.006s 00:13:04.834 01:18:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:04.834 01:18:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:04.834 ************************************ 00:13:04.834 END TEST nvmf_host_management 00:13:04.834 ************************************ 00:13:04.834 01:18:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:04.834 01:18:30 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:04.834 01:18:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:04.834 01:18:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:04.834 01:18:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:04.834 ************************************ 00:13:04.834 START TEST nvmf_lvol 00:13:04.834 ************************************ 00:13:04.834 01:18:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:04.834 * Looking for test storage... 
00:13:04.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:04.834 01:18:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:04.834 01:18:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:13:04.834 01:18:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:04.834 01:18:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:04.834 01:18:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:04.834 01:18:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:04.834 01:18:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:04.834 01:18:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:04.834 01:18:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:04.834 01:18:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:04.834 01:18:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:04.834 01:18:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:04.834 01:18:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:04.834 01:18:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:04.834 01:18:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:04.834 01:18:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:04.834 01:18:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:04.834 01:18:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:04.834 01:18:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:04.834 01:18:30 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:04.834 01:18:30 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:04.834 01:18:30 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:04.834 01:18:30 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.834 01:18:30 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.834 01:18:30 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.834 01:18:30 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:13:04.834 01:18:30 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.834 01:18:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:13:04.834 01:18:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:04.834 01:18:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:04.834 01:18:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:04.835 01:18:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:04.835 01:18:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:04.835 01:18:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:04.835 01:18:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:04.835 01:18:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:04.835 01:18:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:04.835 01:18:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:04.835 01:18:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:04.835 01:18:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:04.835 01:18:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:04.835 01:18:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:04.835 01:18:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:04.835 01:18:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:04.835 01:18:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:04.835 01:18:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:04.835 01:18:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:04.835 01:18:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.835 01:18:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:04.835 01:18:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.835 01:18:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:04.835 01:18:30 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:04.835 01:18:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:13:04.835 01:18:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:09.022 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:09.022 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:09.022 Found net devices under 0000:86:00.0: cvl_0_0 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:09.022 Found net devices under 0000:86:00.1: cvl_0_1 00:13:09.022 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:09.023 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:09.023 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:13:09.023 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:09.023 
01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:09.023 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:09.023 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:09.023 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:09.023 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:09.023 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:09.023 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:09.023 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:09.023 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:09.023 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:09.023 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:09.023 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:09.023 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:09.023 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:09.023 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:09.023 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:09.023 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:09.023 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:09.023 01:18:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:09.281 01:18:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:09.281 01:18:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:09.281 01:18:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:09.281 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:09.281 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:13:09.281 00:13:09.281 --- 10.0.0.2 ping statistics --- 00:13:09.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.281 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:13:09.281 01:18:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:09.281 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:09.281 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:13:09.281 00:13:09.281 --- 10.0.0.1 ping statistics --- 00:13:09.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.281 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:13:09.281 01:18:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:09.282 01:18:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:13:09.282 01:18:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:09.282 01:18:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:09.282 01:18:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:09.282 01:18:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:09.282 01:18:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:09.282 01:18:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:09.282 01:18:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:09.282 01:18:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:09.282 01:18:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:09.282 01:18:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:09.282 01:18:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:09.282 01:18:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=3330377 00:13:09.282 01:18:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 3330377 00:13:09.282 01:18:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 3330377 ']' 00:13:09.282 01:18:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.282 01:18:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:09.282 01:18:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:09.282 01:18:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.282 01:18:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:09.282 01:18:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:09.282 [2024-07-16 01:18:35.151717] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:13:09.282 [2024-07-16 01:18:35.151766] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:09.282 EAL: No free 2048 kB hugepages reported on node 1 00:13:09.282 [2024-07-16 01:18:35.209686] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:09.552 [2024-07-16 01:18:35.288388] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:09.552 [2024-07-16 01:18:35.288422] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
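To recap the nvmf_tcp_init block above: the two physical E810 ports form the whole test topology. cvl_0_0 is moved into a private network namespace and serves as the target interface (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1); nvmf_tgt is then launched inside the namespace via "ip netns exec". Condensed from the trace (run as root; interface names are from this run):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                             # initiator -> target check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator check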
00:13:09.552 [2024-07-16 01:18:35.288429] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:09.552 [2024-07-16 01:18:35.288435] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:09.552 [2024-07-16 01:18:35.288440] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:09.552 [2024-07-16 01:18:35.288504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:09.552 [2024-07-16 01:18:35.288601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:09.552 [2024-07-16 01:18:35.288602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.168 01:18:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:10.168 01:18:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:13:10.168 01:18:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:10.168 01:18:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:10.168 01:18:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:10.168 01:18:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:10.168 01:18:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:10.168 [2024-07-16 01:18:36.117828] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:10.168 01:18:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:10.426 01:18:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:10.426 01:18:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:10.685 01:18:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:10.685 01:18:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:10.943 01:18:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:10.943 01:18:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=4f24398d-cf96-471a-ba04-c3fbc1e33d11 00:13:10.943 01:18:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4f24398d-cf96-471a-ba04-c3fbc1e33d11 lvol 20 00:13:11.202 01:18:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=bf25e098-7a64-4b20-a6c2-7fc08d33413a 00:13:11.202 01:18:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:11.460 01:18:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bf25e098-7a64-4b20-a6c2-7fc08d33413a 00:13:11.460 01:18:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
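Stripped of workspace paths, the provisioning traced above is a short RPC sequence against the just-started target. In the sketch below, rpc.py abbreviates the full scripts/rpc.py path, and the bracketed placeholders stand in for the UUIDs this run printed (lvstore 4f24398d-cf96-471a-ba04-c3fbc1e33d11, lvol bf25e098-7a64-4b20-a6c2-7fc08d33413a):

  rpc.py nvmf_create_transport -t tcp -o -u 8192                     # TCP transport, 8192 B I/O unit
  rpc.py bdev_malloc_create 64 512                                   # -> Malloc0 (64 MiB, 512 B blocks)
  rpc.py bdev_malloc_create 64 512                                   # -> Malloc1
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'   # RAID0 across both malloc bdevs
  rpc.py bdev_lvol_create_lvstore raid0 lvs                          # prints <lvs-uuid>
  rpc.py bdev_lvol_create -u <lvs-uuid> lvol 20                      # 20 MiB lvol, prints <lvol-uuid>
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420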
00:13:11.719 [2024-07-16 01:18:37.584356] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:11.719 01:18:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:11.977 01:18:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3330870 00:13:11.977 01:18:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:11.977 01:18:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:11.977 EAL: No free 2048 kB hugepages reported on node 1 00:13:12.912 01:18:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot bf25e098-7a64-4b20-a6c2-7fc08d33413a MY_SNAPSHOT 00:13:13.169 01:18:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=9b844c96-72de-4132-b987-e78711a0c7d2 00:13:13.169 01:18:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize bf25e098-7a64-4b20-a6c2-7fc08d33413a 30 00:13:13.427 01:18:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 9b844c96-72de-4132-b987-e78711a0c7d2 MY_CLONE 00:13:13.684 01:18:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=74425341-6604-4211-8fff-697863dfd6b5 00:13:13.684 01:18:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 74425341-6604-4211-8fff-697863dfd6b5 00:13:14.249 01:18:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3330870 00:13:22.354 Initializing NVMe Controllers 00:13:22.354 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:22.354 Controller IO queue size 128, less than required. 00:13:22.354 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:22.354 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:13:22.354 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:13:22.354 Initialization complete. Launching workers. 
00:13:22.354 ======================================================== 00:13:22.354 Latency(us) 00:13:22.354 Device Information : IOPS MiB/s Average min max 00:13:22.354 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12433.70 48.57 10267.48 496.36 60501.85 00:13:22.354 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12305.40 48.07 10376.20 2917.29 59039.67 00:13:22.354 ======================================================== 00:13:22.354 Total : 24739.10 96.64 10321.56 496.36 60501.85 00:13:22.354 00:13:22.354 01:18:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:22.612 01:18:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bf25e098-7a64-4b20-a6c2-7fc08d33413a 00:13:22.870 01:18:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4f24398d-cf96-471a-ba04-c3fbc1e33d11 00:13:22.870 01:18:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:13:22.870 01:18:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:13:22.870 01:18:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:13:22.870 01:18:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:22.870 01:18:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:13:22.870 01:18:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:22.870 01:18:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:13:22.870 01:18:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:22.870 01:18:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:22.870 rmmod nvme_tcp 00:13:22.870 rmmod nvme_fabrics 00:13:22.870 rmmod nvme_keyring 00:13:23.129 01:18:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:23.129 01:18:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:13:23.129 01:18:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:13:23.129 01:18:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 3330377 ']' 00:13:23.129 01:18:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 3330377 00:13:23.129 01:18:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 3330377 ']' 00:13:23.129 01:18:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 3330377 00:13:23.129 01:18:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:13:23.129 01:18:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:23.129 01:18:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3330377 00:13:23.129 01:18:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:23.129 01:18:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:23.129 01:18:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3330377' 00:13:23.129 killing process with pid 3330377 00:13:23.129 01:18:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 3330377 00:13:23.129 01:18:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 3330377 00:13:23.388 01:18:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:23.388 
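While spdk_nvme_perf drove 4 KiB random writes at queue depth 128 for ten seconds, the script walked the logical-volume lifecycle recorded above. Condensed, with placeholders for the UUIDs this run printed (lvol bf25e098-7a64-4b20-a6c2-7fc08d33413a, snapshot 9b844c96-72de-4132-b987-e78711a0c7d2, clone 74425341-6604-4211-8fff-697863dfd6b5):

  rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT   # point-in-time snapshot, prints <snap-uuid>
  rpc.py bdev_lvol_resize <lvol-uuid> 30              # grow the live lvol to 30 MiB under I/O
  rpc.py bdev_lvol_clone <snap-uuid> MY_CLONE         # writable clone of the snapshot, prints <clone-uuid>
  rpc.py bdev_lvol_inflate <clone-uuid>               # allocate all clusters, decoupling the clone from its snapshot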
01:18:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:23.388 01:18:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:23.388 01:18:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:23.388 01:18:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:23.388 01:18:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.388 01:18:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:23.388 01:18:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.314 01:18:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:25.315 00:13:25.315 real 0m20.916s 00:13:25.315 user 1m3.663s 00:13:25.315 sys 0m6.221s 00:13:25.315 01:18:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:25.315 01:18:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:25.315 ************************************ 00:13:25.315 END TEST nvmf_lvol 00:13:25.315 ************************************ 00:13:25.315 01:18:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:25.315 01:18:51 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:25.315 01:18:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:25.315 01:18:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:25.315 01:18:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:25.315 ************************************ 00:13:25.315 START TEST nvmf_lvs_grow 00:13:25.315 ************************************ 00:13:25.315 01:18:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:25.573 * Looking for test storage... 
00:13:25.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:25.573 01:18:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:25.573 01:18:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:13:25.573 01:18:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:25.573 01:18:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:25.573 01:18:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:25.573 01:18:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:25.573 01:18:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:25.573 01:18:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:25.573 01:18:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:25.573 01:18:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:25.573 01:18:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:25.573 01:18:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:25.573 01:18:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:25.573 01:18:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:25.573 01:18:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:25.573 01:18:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:25.574 01:18:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:25.574 01:18:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:25.574 01:18:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:25.574 01:18:51 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:25.574 01:18:51 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:25.574 01:18:51 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:25.574 01:18:51 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.574 01:18:51 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.574 01:18:51 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.574 01:18:51 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:13:25.574 01:18:51 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.574 01:18:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:13:25.574 01:18:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:25.574 01:18:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:25.574 01:18:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:25.574 01:18:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:25.574 01:18:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:25.574 01:18:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:25.574 01:18:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:25.574 01:18:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:25.574 01:18:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:25.574 01:18:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:25.574 01:18:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:13:25.574 01:18:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:25.574 01:18:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:25.574 01:18:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:25.574 01:18:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:25.574 01:18:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:25.574 01:18:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.574 01:18:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:25.574 01:18:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.574 01:18:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:25.574 01:18:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:25.574 01:18:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:13:25.574 01:18:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:30.841 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:30.841 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:30.841 Found net devices under 0000:86:00.0: cvl_0_0 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:30.841 Found net devices under 0000:86:00.1: cvl_0_1 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:30.841 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:30.841 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:13:30.841 00:13:30.841 --- 10.0.0.2 ping statistics --- 00:13:30.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.841 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:30.841 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:30.841 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:13:30.841 00:13:30.841 --- 10.0.0.1 ping statistics --- 00:13:30.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.841 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=3336228 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 3336228 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 3336228 ']' 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:30.841 01:18:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:30.842 01:18:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:30.842 01:18:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:30.842 [2024-07-16 01:18:56.506944] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:13:30.842 [2024-07-16 01:18:56.506985] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:30.842 EAL: No free 2048 kB hugepages reported on node 1 00:13:30.842 [2024-07-16 01:18:56.560519] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.842 [2024-07-16 01:18:56.637328] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:30.842 [2024-07-16 01:18:56.637372] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
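The lvs_grow_clean test traced below exercises growing a logical-volume store in place when its backing device grows: a 200 MiB file-backed AIO bdev hosts the lvstore (49 data clusters at a 4 MiB cluster size), the file is then enlarged to 400 MiB, the AIO bdev is rescanned, and bdev_lvol_grow_lvstore extends the store to 99 clusters. A condensed sketch of that flow, where aio_bdev_file stands in for the workspace-path backing file and <lvs-uuid> for the run-specific UUID:

  truncate -s 200M aio_bdev_file                         # 200 MiB backing file
  rpc.py bdev_aio_create aio_bdev_file aio_bdev 4096     # AIO bdev with 4 KiB blocks
  rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
         --md-pages-per-cluster-ratio 300 aio_bdev lvs   # prints <lvs-uuid>; 49 data clusters
  rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150         # 150 MiB thin lvol on the store
  truncate -s 400M aio_bdev_file                         # grow the backing file
  rpc.py bdev_aio_rescan aio_bdev                        # bdev picks up 51200 -> 102400 blocks
  rpc.py bdev_lvol_grow_lvstore -u <lvs-uuid>            # lvstore now reports 99 data clusters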
00:13:30.842 [2024-07-16 01:18:56.637379] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:30.842 [2024-07-16 01:18:56.637385] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:30.842 [2024-07-16 01:18:56.637390] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:30.842 [2024-07-16 01:18:56.637425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.408 01:18:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:31.408 01:18:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:13:31.408 01:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:31.408 01:18:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:31.408 01:18:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:31.408 01:18:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:31.408 01:18:57 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:31.666 [2024-07-16 01:18:57.491225] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:31.666 01:18:57 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:13:31.666 01:18:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:31.666 01:18:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:31.666 01:18:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:31.666 ************************************ 00:13:31.666 START TEST lvs_grow_clean 00:13:31.666 ************************************ 00:13:31.666 01:18:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:13:31.666 01:18:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:31.666 01:18:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:31.666 01:18:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:31.666 01:18:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:31.666 01:18:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:31.666 01:18:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:31.666 01:18:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:31.666 01:18:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:31.666 01:18:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:31.924 01:18:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:13:31.924 01:18:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:32.182 01:18:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=b15812f7-218c-4658-a47b-17bac27cd4c8 00:13:32.182 01:18:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b15812f7-218c-4658-a47b-17bac27cd4c8 00:13:32.182 01:18:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:32.182 01:18:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:32.182 01:18:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:32.182 01:18:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b15812f7-218c-4658-a47b-17bac27cd4c8 lvol 150 00:13:32.440 01:18:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=b26f934f-22f3-4a59-a775-ead4aadf4bb1 00:13:32.440 01:18:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:32.440 01:18:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:32.440 [2024-07-16 01:18:58.415960] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:32.440 [2024-07-16 01:18:58.416010] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:32.440 true 00:13:32.698 01:18:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b15812f7-218c-4658-a47b-17bac27cd4c8 00:13:32.698 01:18:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:32.698 01:18:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:32.698 01:18:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:32.956 01:18:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b26f934f-22f3-4a59-a775-ead4aadf4bb1 00:13:32.956 01:18:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:33.213 [2024-07-16 01:18:59.073941] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:33.213 01:18:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:33.472 01:18:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3336731 00:13:33.472 01:18:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:33.472 01:18:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:33.472 01:18:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3336731 /var/tmp/bdevperf.sock 00:13:33.472 01:18:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 3336731 ']' 00:13:33.472 01:18:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:33.472 01:18:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:33.472 01:18:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:33.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:33.472 01:18:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:33.472 01:18:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:33.472 [2024-07-16 01:18:59.300307] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
00:13:33.472 [2024-07-16 01:18:59.300360] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3336731 ] 00:13:33.472 EAL: No free 2048 kB hugepages reported on node 1 00:13:33.472 [2024-07-16 01:18:59.354041] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.472 [2024-07-16 01:18:59.424983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:34.404 01:19:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:34.404 01:19:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:13:34.404 01:19:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:34.662 Nvme0n1 00:13:34.662 01:19:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:34.662 [ 00:13:34.662 { 00:13:34.662 "name": "Nvme0n1", 00:13:34.662 "aliases": [ 00:13:34.662 "b26f934f-22f3-4a59-a775-ead4aadf4bb1" 00:13:34.662 ], 00:13:34.662 "product_name": "NVMe disk", 00:13:34.662 "block_size": 4096, 00:13:34.662 "num_blocks": 38912, 00:13:34.662 "uuid": "b26f934f-22f3-4a59-a775-ead4aadf4bb1", 00:13:34.662 "assigned_rate_limits": { 00:13:34.662 "rw_ios_per_sec": 0, 00:13:34.662 "rw_mbytes_per_sec": 0, 00:13:34.662 "r_mbytes_per_sec": 0, 00:13:34.662 "w_mbytes_per_sec": 0 00:13:34.662 }, 00:13:34.662 "claimed": false, 00:13:34.662 "zoned": false, 00:13:34.662 "supported_io_types": { 00:13:34.662 "read": true, 00:13:34.662 "write": true, 00:13:34.662 "unmap": true, 00:13:34.662 "flush": true, 00:13:34.662 "reset": true, 00:13:34.662 "nvme_admin": true, 00:13:34.662 "nvme_io": true, 00:13:34.662 "nvme_io_md": false, 00:13:34.662 "write_zeroes": true, 00:13:34.662 "zcopy": false, 00:13:34.662 "get_zone_info": false, 00:13:34.662 "zone_management": false, 00:13:34.662 "zone_append": false, 00:13:34.662 "compare": true, 00:13:34.662 "compare_and_write": true, 00:13:34.662 "abort": true, 00:13:34.662 "seek_hole": false, 00:13:34.662 "seek_data": false, 00:13:34.662 "copy": true, 00:13:34.662 "nvme_iov_md": false 00:13:34.662 }, 00:13:34.662 "memory_domains": [ 00:13:34.662 { 00:13:34.662 "dma_device_id": "system", 00:13:34.662 "dma_device_type": 1 00:13:34.662 } 00:13:34.662 ], 00:13:34.662 "driver_specific": { 00:13:34.662 "nvme": [ 00:13:34.662 { 00:13:34.662 "trid": { 00:13:34.662 "trtype": "TCP", 00:13:34.662 "adrfam": "IPv4", 00:13:34.662 "traddr": "10.0.0.2", 00:13:34.662 "trsvcid": "4420", 00:13:34.662 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:34.662 }, 00:13:34.662 "ctrlr_data": { 00:13:34.662 "cntlid": 1, 00:13:34.662 "vendor_id": "0x8086", 00:13:34.662 "model_number": "SPDK bdev Controller", 00:13:34.662 "serial_number": "SPDK0", 00:13:34.662 "firmware_revision": "24.09", 00:13:34.662 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:34.662 "oacs": { 00:13:34.662 "security": 0, 00:13:34.662 "format": 0, 00:13:34.662 "firmware": 0, 00:13:34.662 "ns_manage": 0 00:13:34.662 }, 00:13:34.662 "multi_ctrlr": true, 00:13:34.662 "ana_reporting": false 00:13:34.662 }, 
00:13:34.662 "vs": { 00:13:34.662 "nvme_version": "1.3" 00:13:34.662 }, 00:13:34.662 "ns_data": { 00:13:34.662 "id": 1, 00:13:34.662 "can_share": true 00:13:34.662 } 00:13:34.662 } 00:13:34.662 ], 00:13:34.662 "mp_policy": "active_passive" 00:13:34.662 } 00:13:34.662 } 00:13:34.662 ] 00:13:34.662 01:19:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3336963 00:13:34.662 01:19:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:34.662 01:19:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:34.919 Running I/O for 10 seconds... 00:13:35.850 Latency(us) 00:13:35.850 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:35.850 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:35.850 Nvme0n1 : 1.00 23746.00 92.76 0.00 0.00 0.00 0.00 0.00 00:13:35.850 =================================================================================================================== 00:13:35.850 Total : 23746.00 92.76 0.00 0.00 0.00 0.00 0.00 00:13:35.850 00:13:36.783 01:19:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b15812f7-218c-4658-a47b-17bac27cd4c8 00:13:36.783 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:36.783 Nvme0n1 : 2.00 24014.50 93.81 0.00 0.00 0.00 0.00 0.00 00:13:36.783 =================================================================================================================== 00:13:36.783 Total : 24014.50 93.81 0.00 0.00 0.00 0.00 0.00 00:13:36.783 00:13:36.783 true 00:13:37.040 01:19:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b15812f7-218c-4658-a47b-17bac27cd4c8 00:13:37.040 01:19:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:37.040 01:19:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:37.040 01:19:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:37.041 01:19:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3336963 00:13:37.974 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:37.974 Nvme0n1 : 3.00 24038.33 93.90 0.00 0.00 0.00 0.00 0.00 00:13:37.974 =================================================================================================================== 00:13:37.974 Total : 24038.33 93.90 0.00 0.00 0.00 0.00 0.00 00:13:37.974 00:13:38.908 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:38.908 Nvme0n1 : 4.00 24093.25 94.11 0.00 0.00 0.00 0.00 0.00 00:13:38.908 =================================================================================================================== 00:13:38.908 Total : 24093.25 94.11 0.00 0.00 0.00 0.00 0.00 00:13:38.908 00:13:39.843 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:39.843 Nvme0n1 : 5.00 24125.60 94.24 0.00 0.00 0.00 0.00 0.00 00:13:39.843 =================================================================================================================== 00:13:39.843 
Total : 24125.60 94.24 0.00 0.00 0.00 0.00 0.00 00:13:39.843 00:13:40.823 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:40.823 Nvme0n1 : 6.00 24169.33 94.41 0.00 0.00 0.00 0.00 0.00 00:13:40.823 =================================================================================================================== 00:13:40.823 Total : 24169.33 94.41 0.00 0.00 0.00 0.00 0.00 00:13:40.823 00:13:41.796 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:41.796 Nvme0n1 : 7.00 24200.29 94.53 0.00 0.00 0.00 0.00 0.00 00:13:41.796 =================================================================================================================== 00:13:41.796 Total : 24200.29 94.53 0.00 0.00 0.00 0.00 0.00 00:13:41.796 00:13:42.733 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:42.733 Nvme0n1 : 8.00 24223.62 94.62 0.00 0.00 0.00 0.00 0.00 00:13:42.733 =================================================================================================================== 00:13:42.733 Total : 24223.62 94.62 0.00 0.00 0.00 0.00 0.00 00:13:42.733 00:13:44.109 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:44.109 Nvme0n1 : 9.00 24244.00 94.70 0.00 0.00 0.00 0.00 0.00 00:13:44.109 =================================================================================================================== 00:13:44.109 Total : 24244.00 94.70 0.00 0.00 0.00 0.00 0.00 00:13:44.109 00:13:45.045 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:45.045 Nvme0n1 : 10.00 24231.50 94.65 0.00 0.00 0.00 0.00 0.00 00:13:45.045 =================================================================================================================== 00:13:45.045 Total : 24231.50 94.65 0.00 0.00 0.00 0.00 0.00 00:13:45.045 00:13:45.045 00:13:45.045 Latency(us) 00:13:45.045 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:45.045 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:45.045 Nvme0n1 : 10.00 24233.41 94.66 0.00 0.00 5279.05 3120.76 13918.60 00:13:45.045 =================================================================================================================== 00:13:45.045 Total : 24233.41 94.66 0.00 0.00 5279.05 3120.76 13918.60 00:13:45.045 0 00:13:45.045 01:19:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3336731 00:13:45.045 01:19:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 3336731 ']' 00:13:45.045 01:19:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 3336731 00:13:45.045 01:19:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:13:45.045 01:19:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:45.045 01:19:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3336731 00:13:45.045 01:19:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:45.045 01:19:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:45.045 01:19:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3336731' 00:13:45.045 killing process with pid 3336731 00:13:45.045 01:19:10 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 3336731 00:13:45.045 Received shutdown signal, test time was about 10.000000 seconds 00:13:45.045 00:13:45.045 Latency(us) 00:13:45.045 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:45.045 =================================================================================================================== 00:13:45.045 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:45.045 01:19:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 3336731 00:13:45.045 01:19:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:45.304 01:19:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:45.562 01:19:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b15812f7-218c-4658-a47b-17bac27cd4c8 00:13:45.562 01:19:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:45.562 01:19:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:45.562 01:19:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:13:45.562 01:19:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:45.821 [2024-07-16 01:19:11.644569] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:45.821 01:19:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b15812f7-218c-4658-a47b-17bac27cd4c8 00:13:45.821 01:19:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:13:45.821 01:19:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b15812f7-218c-4658-a47b-17bac27cd4c8 00:13:45.821 01:19:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:45.821 01:19:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:45.821 01:19:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:45.821 01:19:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:45.821 01:19:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:45.821 01:19:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:45.821 01:19:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:45.821 01:19:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:45.821 01:19:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b15812f7-218c-4658-a47b-17bac27cd4c8 00:13:46.079 request: 00:13:46.079 { 00:13:46.079 "uuid": "b15812f7-218c-4658-a47b-17bac27cd4c8", 00:13:46.079 "method": "bdev_lvol_get_lvstores", 00:13:46.079 "req_id": 1 00:13:46.079 } 00:13:46.079 Got JSON-RPC error response 00:13:46.079 response: 00:13:46.079 { 00:13:46.079 "code": -19, 00:13:46.079 "message": "No such device" 00:13:46.079 } 00:13:46.079 01:19:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:13:46.079 01:19:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:46.079 01:19:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:46.079 01:19:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:46.079 01:19:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:46.079 aio_bdev 00:13:46.079 01:19:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b26f934f-22f3-4a59-a775-ead4aadf4bb1 00:13:46.079 01:19:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=b26f934f-22f3-4a59-a775-ead4aadf4bb1 00:13:46.079 01:19:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:46.079 01:19:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:13:46.079 01:19:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:46.079 01:19:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:46.079 01:19:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:46.338 01:19:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b26f934f-22f3-4a59-a775-ead4aadf4bb1 -t 2000 00:13:46.596 [ 00:13:46.596 { 00:13:46.596 "name": "b26f934f-22f3-4a59-a775-ead4aadf4bb1", 00:13:46.596 "aliases": [ 00:13:46.596 "lvs/lvol" 00:13:46.596 ], 00:13:46.596 "product_name": "Logical Volume", 00:13:46.596 "block_size": 4096, 00:13:46.596 "num_blocks": 38912, 00:13:46.596 "uuid": "b26f934f-22f3-4a59-a775-ead4aadf4bb1", 00:13:46.596 "assigned_rate_limits": { 00:13:46.596 "rw_ios_per_sec": 0, 00:13:46.596 "rw_mbytes_per_sec": 0, 00:13:46.596 "r_mbytes_per_sec": 0, 00:13:46.596 "w_mbytes_per_sec": 0 00:13:46.596 }, 00:13:46.596 "claimed": false, 00:13:46.596 "zoned": false, 00:13:46.596 "supported_io_types": { 00:13:46.596 "read": true, 00:13:46.596 "write": true, 00:13:46.596 "unmap": true, 00:13:46.596 "flush": false, 00:13:46.596 "reset": true, 00:13:46.596 "nvme_admin": false, 00:13:46.596 "nvme_io": false, 00:13:46.596 
"nvme_io_md": false, 00:13:46.596 "write_zeroes": true, 00:13:46.596 "zcopy": false, 00:13:46.596 "get_zone_info": false, 00:13:46.596 "zone_management": false, 00:13:46.596 "zone_append": false, 00:13:46.596 "compare": false, 00:13:46.596 "compare_and_write": false, 00:13:46.596 "abort": false, 00:13:46.596 "seek_hole": true, 00:13:46.596 "seek_data": true, 00:13:46.596 "copy": false, 00:13:46.596 "nvme_iov_md": false 00:13:46.596 }, 00:13:46.596 "driver_specific": { 00:13:46.596 "lvol": { 00:13:46.596 "lvol_store_uuid": "b15812f7-218c-4658-a47b-17bac27cd4c8", 00:13:46.596 "base_bdev": "aio_bdev", 00:13:46.596 "thin_provision": false, 00:13:46.596 "num_allocated_clusters": 38, 00:13:46.596 "snapshot": false, 00:13:46.596 "clone": false, 00:13:46.596 "esnap_clone": false 00:13:46.596 } 00:13:46.596 } 00:13:46.596 } 00:13:46.596 ] 00:13:46.596 01:19:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:13:46.596 01:19:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b15812f7-218c-4658-a47b-17bac27cd4c8 00:13:46.596 01:19:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:46.596 01:19:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:46.596 01:19:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b15812f7-218c-4658-a47b-17bac27cd4c8 00:13:46.596 01:19:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:46.855 01:19:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:46.855 01:19:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b26f934f-22f3-4a59-a775-ead4aadf4bb1 00:13:47.113 01:19:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b15812f7-218c-4658-a47b-17bac27cd4c8 00:13:47.113 01:19:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:47.372 01:19:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:47.372 00:13:47.372 real 0m15.685s 00:13:47.372 user 0m15.381s 00:13:47.372 sys 0m1.395s 00:13:47.372 01:19:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:47.372 01:19:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:47.372 ************************************ 00:13:47.372 END TEST lvs_grow_clean 00:13:47.372 ************************************ 00:13:47.372 01:19:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:13:47.372 01:19:13 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:13:47.372 01:19:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:47.372 01:19:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:13:47.372 01:19:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:47.372 ************************************ 00:13:47.372 START TEST lvs_grow_dirty 00:13:47.372 ************************************ 00:13:47.372 01:19:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:13:47.372 01:19:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:47.372 01:19:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:47.372 01:19:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:47.372 01:19:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:47.372 01:19:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:47.372 01:19:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:47.372 01:19:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:47.372 01:19:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:47.372 01:19:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:47.630 01:19:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:47.630 01:19:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:47.887 01:19:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=8c023de1-9635-418a-9569-572e7541835a 00:13:47.887 01:19:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c023de1-9635-418a-9569-572e7541835a 00:13:47.887 01:19:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:47.887 01:19:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:47.887 01:19:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:47.887 01:19:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8c023de1-9635-418a-9569-572e7541835a lvol 150 00:13:48.145 01:19:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=17d3c59f-73a0-4e19-86c3-fd420656af80 00:13:48.145 01:19:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:48.145 01:19:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:48.404 
[2024-07-16 01:19:14.141774] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:48.404 [2024-07-16 01:19:14.141821] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:48.404 true 00:13:48.404 01:19:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c023de1-9635-418a-9569-572e7541835a 00:13:48.404 01:19:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:48.404 01:19:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:48.404 01:19:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:48.663 01:19:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 17d3c59f-73a0-4e19-86c3-fd420656af80 00:13:48.922 01:19:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:48.922 [2024-07-16 01:19:14.811761] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.922 01:19:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:49.181 01:19:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3339328 00:13:49.181 01:19:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:49.181 01:19:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:49.181 01:19:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3339328 /var/tmp/bdevperf.sock 00:13:49.181 01:19:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 3339328 ']' 00:13:49.181 01:19:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:49.181 01:19:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:49.181 01:19:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:49.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
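bdevperf runs here as a separate process with its own RPC socket: it is started idle (-z), the test attaches the NVMe-oF TCP target to it as Nvme0, and the preconfigured 10-second randwrite workload is then kicked off through the helper script. A condensed sketch of that handshake, with $spdk standing in for the repository root and all flags taken from the trace above:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sock=/var/tmp/bdevperf.sock
    # start bdevperf idle and wait for its RPC socket to appear
    $spdk/build/examples/bdevperf -r $sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    # attach the TCP target as bdev Nvme0, then run the configured workload
    $spdk/scripts/rpc.py -s $sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    $spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests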
00:13:49.181 01:19:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:49.181 01:19:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:49.181 [2024-07-16 01:19:15.032657] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:13:49.181 [2024-07-16 01:19:15.032703] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3339328 ] 00:13:49.181 EAL: No free 2048 kB hugepages reported on node 1 00:13:49.181 [2024-07-16 01:19:15.086937] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.181 [2024-07-16 01:19:15.158076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:50.115 01:19:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:50.115 01:19:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:13:50.115 01:19:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:50.115 Nvme0n1 00:13:50.374 01:19:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:50.374 [ 00:13:50.374 { 00:13:50.374 "name": "Nvme0n1", 00:13:50.374 "aliases": [ 00:13:50.374 "17d3c59f-73a0-4e19-86c3-fd420656af80" 00:13:50.374 ], 00:13:50.374 "product_name": "NVMe disk", 00:13:50.374 "block_size": 4096, 00:13:50.374 "num_blocks": 38912, 00:13:50.374 "uuid": "17d3c59f-73a0-4e19-86c3-fd420656af80", 00:13:50.374 "assigned_rate_limits": { 00:13:50.374 "rw_ios_per_sec": 0, 00:13:50.374 "rw_mbytes_per_sec": 0, 00:13:50.374 "r_mbytes_per_sec": 0, 00:13:50.374 "w_mbytes_per_sec": 0 00:13:50.374 }, 00:13:50.374 "claimed": false, 00:13:50.374 "zoned": false, 00:13:50.374 "supported_io_types": { 00:13:50.374 "read": true, 00:13:50.374 "write": true, 00:13:50.374 "unmap": true, 00:13:50.374 "flush": true, 00:13:50.374 "reset": true, 00:13:50.374 "nvme_admin": true, 00:13:50.374 "nvme_io": true, 00:13:50.374 "nvme_io_md": false, 00:13:50.374 "write_zeroes": true, 00:13:50.374 "zcopy": false, 00:13:50.374 "get_zone_info": false, 00:13:50.374 "zone_management": false, 00:13:50.374 "zone_append": false, 00:13:50.374 "compare": true, 00:13:50.374 "compare_and_write": true, 00:13:50.374 "abort": true, 00:13:50.374 "seek_hole": false, 00:13:50.374 "seek_data": false, 00:13:50.374 "copy": true, 00:13:50.374 "nvme_iov_md": false 00:13:50.374 }, 00:13:50.374 "memory_domains": [ 00:13:50.374 { 00:13:50.374 "dma_device_id": "system", 00:13:50.374 "dma_device_type": 1 00:13:50.374 } 00:13:50.374 ], 00:13:50.374 "driver_specific": { 00:13:50.374 "nvme": [ 00:13:50.374 { 00:13:50.374 "trid": { 00:13:50.374 "trtype": "TCP", 00:13:50.374 "adrfam": "IPv4", 00:13:50.374 "traddr": "10.0.0.2", 00:13:50.374 "trsvcid": "4420", 00:13:50.374 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:50.374 }, 00:13:50.374 "ctrlr_data": { 00:13:50.374 "cntlid": 1, 00:13:50.374 "vendor_id": "0x8086", 00:13:50.374 "model_number": "SPDK bdev Controller", 00:13:50.374 "serial_number": "SPDK0", 
00:13:50.374 "firmware_revision": "24.09", 00:13:50.374 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:50.374 "oacs": { 00:13:50.374 "security": 0, 00:13:50.374 "format": 0, 00:13:50.374 "firmware": 0, 00:13:50.374 "ns_manage": 0 00:13:50.374 }, 00:13:50.374 "multi_ctrlr": true, 00:13:50.374 "ana_reporting": false 00:13:50.374 }, 00:13:50.374 "vs": { 00:13:50.374 "nvme_version": "1.3" 00:13:50.374 }, 00:13:50.374 "ns_data": { 00:13:50.374 "id": 1, 00:13:50.374 "can_share": true 00:13:50.374 } 00:13:50.374 } 00:13:50.374 ], 00:13:50.374 "mp_policy": "active_passive" 00:13:50.374 } 00:13:50.374 } 00:13:50.374 ] 00:13:50.374 01:19:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3339568 00:13:50.374 01:19:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:50.374 01:19:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:50.374 Running I/O for 10 seconds... 00:13:51.748 Latency(us) 00:13:51.748 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:51.748 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:51.748 Nvme0n1 : 1.00 22862.00 89.30 0.00 0.00 0.00 0.00 0.00 00:13:51.748 =================================================================================================================== 00:13:51.748 Total : 22862.00 89.30 0.00 0.00 0.00 0.00 0.00 00:13:51.748 00:13:52.315 01:19:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8c023de1-9635-418a-9569-572e7541835a 00:13:52.573 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:52.573 Nvme0n1 : 2.00 22947.00 89.64 0.00 0.00 0.00 0.00 0.00 00:13:52.573 =================================================================================================================== 00:13:52.573 Total : 22947.00 89.64 0.00 0.00 0.00 0.00 0.00 00:13:52.573 00:13:52.573 true 00:13:52.573 01:19:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c023de1-9635-418a-9569-572e7541835a 00:13:52.573 01:19:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:52.832 01:19:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:52.832 01:19:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:52.832 01:19:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3339568 00:13:53.399 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:53.399 Nvme0n1 : 3.00 22964.67 89.71 0.00 0.00 0.00 0.00 0.00 00:13:53.399 =================================================================================================================== 00:13:53.399 Total : 22964.67 89.71 0.00 0.00 0.00 0.00 0.00 00:13:53.399 00:13:54.775 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:54.775 Nvme0n1 : 4.00 23003.50 89.86 0.00 0.00 0.00 0.00 0.00 00:13:54.775 =================================================================================================================== 00:13:54.775 Total : 23003.50 89.86 0.00 
0.00 0.00 0.00 0.00 00:13:54.775 00:13:55.711 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:55.711 Nvme0n1 : 5.00 23046.00 90.02 0.00 0.00 0.00 0.00 0.00 00:13:55.711 =================================================================================================================== 00:13:55.711 Total : 23046.00 90.02 0.00 0.00 0.00 0.00 0.00 00:13:55.711 00:13:56.646 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:56.646 Nvme0n1 : 6.00 23081.00 90.16 0.00 0.00 0.00 0.00 0.00 00:13:56.646 =================================================================================================================== 00:13:56.646 Total : 23081.00 90.16 0.00 0.00 0.00 0.00 0.00 00:13:56.646 00:13:57.581 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:57.581 Nvme0n1 : 7.00 23106.00 90.26 0.00 0.00 0.00 0.00 0.00 00:13:57.581 =================================================================================================================== 00:13:57.581 Total : 23106.00 90.26 0.00 0.00 0.00 0.00 0.00 00:13:57.581 00:13:58.512 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:58.512 Nvme0n1 : 8.00 23100.75 90.24 0.00 0.00 0.00 0.00 0.00 00:13:58.512 =================================================================================================================== 00:13:58.512 Total : 23100.75 90.24 0.00 0.00 0.00 0.00 0.00 00:13:58.512 00:13:59.445 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:59.445 Nvme0n1 : 9.00 23121.56 90.32 0.00 0.00 0.00 0.00 0.00 00:13:59.445 =================================================================================================================== 00:13:59.445 Total : 23121.56 90.32 0.00 0.00 0.00 0.00 0.00 00:13:59.445 00:14:00.818 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:00.818 Nvme0n1 : 10.00 23131.80 90.36 0.00 0.00 0.00 0.00 0.00 00:14:00.818 =================================================================================================================== 00:14:00.818 Total : 23131.80 90.36 0.00 0.00 0.00 0.00 0.00 00:14:00.818 00:14:00.818 00:14:00.818 Latency(us) 00:14:00.818 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:00.818 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:00.818 Nvme0n1 : 10.01 23131.24 90.36 0.00 0.00 5529.65 1474.56 7333.79 00:14:00.818 =================================================================================================================== 00:14:00.818 Total : 23131.24 90.36 0.00 0.00 5529.65 1474.56 7333.79 00:14:00.818 0 00:14:00.818 01:19:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3339328 00:14:00.818 01:19:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 3339328 ']' 00:14:00.818 01:19:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 3339328 00:14:00.818 01:19:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:14:00.818 01:19:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:00.818 01:19:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3339328 00:14:00.818 01:19:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:00.818 01:19:26 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:00.818 01:19:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3339328' 00:14:00.818 killing process with pid 3339328 00:14:00.818 01:19:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 3339328 00:14:00.818 Received shutdown signal, test time was about 10.000000 seconds 00:14:00.818 00:14:00.818 Latency(us) 00:14:00.818 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:00.818 =================================================================================================================== 00:14:00.818 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:00.818 01:19:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 3339328 00:14:00.818 01:19:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:00.818 01:19:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:01.076 01:19:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c023de1-9635-418a-9569-572e7541835a 00:14:01.076 01:19:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:01.334 01:19:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:01.334 01:19:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:14:01.334 01:19:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3336228 00:14:01.334 01:19:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3336228 00:14:01.334 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3336228 Killed "${NVMF_APP[@]}" "$@" 00:14:01.334 01:19:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:14:01.334 01:19:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:14:01.334 01:19:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:01.334 01:19:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:01.334 01:19:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:01.334 01:19:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3341407 00:14:01.334 01:19:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 3341407 00:14:01.334 01:19:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:01.334 01:19:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 3341407 ']' 00:14:01.334 01:19:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.334 01:19:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:14:01.334 01:19:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.334 01:19:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:01.334 01:19:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:01.334 [2024-07-16 01:19:27.251096] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:14:01.334 [2024-07-16 01:19:27.251145] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:01.334 EAL: No free 2048 kB hugepages reported on node 1 00:14:01.334 [2024-07-16 01:19:27.309760] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.592 [2024-07-16 01:19:27.387878] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:01.592 [2024-07-16 01:19:27.387910] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:01.592 [2024-07-16 01:19:27.387917] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:01.592 [2024-07-16 01:19:27.387923] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:01.592 [2024-07-16 01:19:27.387928] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
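The restarted target is launched with -e 0xFFFF, so all tracepoint groups are enabled, and the app_setup_trace notices above spell out the two ways to get at the resulting trace data. Both commands come straight from those notices; only the copy destination is illustrative:

    # snapshot live events from the target started with shm id 0 (-i 0)
    spdk_trace -s nvmf -i 0
    # or keep the raw shared-memory trace file for offline analysis
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0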
00:14:01.592 [2024-07-16 01:19:27.387961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.156 01:19:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:02.156 01:19:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:14:02.156 01:19:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:02.156 01:19:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:02.156 01:19:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:02.156 01:19:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:02.156 01:19:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:02.413 [2024-07-16 01:19:28.234935] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:02.413 [2024-07-16 01:19:28.235026] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:02.413 [2024-07-16 01:19:28.235050] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:02.413 01:19:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:14:02.413 01:19:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 17d3c59f-73a0-4e19-86c3-fd420656af80 00:14:02.413 01:19:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=17d3c59f-73a0-4e19-86c3-fd420656af80 00:14:02.413 01:19:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:02.413 01:19:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:14:02.413 01:19:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:02.413 01:19:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:02.413 01:19:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:02.671 01:19:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 17d3c59f-73a0-4e19-86c3-fd420656af80 -t 2000 00:14:02.671 [ 00:14:02.671 { 00:14:02.671 "name": "17d3c59f-73a0-4e19-86c3-fd420656af80", 00:14:02.671 "aliases": [ 00:14:02.671 "lvs/lvol" 00:14:02.671 ], 00:14:02.671 "product_name": "Logical Volume", 00:14:02.671 "block_size": 4096, 00:14:02.671 "num_blocks": 38912, 00:14:02.671 "uuid": "17d3c59f-73a0-4e19-86c3-fd420656af80", 00:14:02.671 "assigned_rate_limits": { 00:14:02.671 "rw_ios_per_sec": 0, 00:14:02.671 "rw_mbytes_per_sec": 0, 00:14:02.671 "r_mbytes_per_sec": 0, 00:14:02.671 "w_mbytes_per_sec": 0 00:14:02.671 }, 00:14:02.671 "claimed": false, 00:14:02.671 "zoned": false, 00:14:02.671 "supported_io_types": { 00:14:02.671 "read": true, 00:14:02.671 "write": true, 00:14:02.671 "unmap": true, 00:14:02.671 "flush": false, 00:14:02.671 "reset": true, 00:14:02.671 "nvme_admin": false, 00:14:02.671 "nvme_io": false, 00:14:02.671 "nvme_io_md": 
false, 00:14:02.671 "write_zeroes": true, 00:14:02.671 "zcopy": false, 00:14:02.671 "get_zone_info": false, 00:14:02.671 "zone_management": false, 00:14:02.671 "zone_append": false, 00:14:02.671 "compare": false, 00:14:02.671 "compare_and_write": false, 00:14:02.671 "abort": false, 00:14:02.671 "seek_hole": true, 00:14:02.671 "seek_data": true, 00:14:02.671 "copy": false, 00:14:02.671 "nvme_iov_md": false 00:14:02.671 }, 00:14:02.671 "driver_specific": { 00:14:02.671 "lvol": { 00:14:02.671 "lvol_store_uuid": "8c023de1-9635-418a-9569-572e7541835a", 00:14:02.671 "base_bdev": "aio_bdev", 00:14:02.671 "thin_provision": false, 00:14:02.671 "num_allocated_clusters": 38, 00:14:02.671 "snapshot": false, 00:14:02.671 "clone": false, 00:14:02.671 "esnap_clone": false 00:14:02.671 } 00:14:02.671 } 00:14:02.671 } 00:14:02.671 ] 00:14:02.671 01:19:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:14:02.671 01:19:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c023de1-9635-418a-9569-572e7541835a 00:14:02.671 01:19:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:14:02.929 01:19:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:14:02.929 01:19:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c023de1-9635-418a-9569-572e7541835a 00:14:02.929 01:19:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:14:02.929 01:19:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:14:02.929 01:19:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:03.187 [2024-07-16 01:19:29.067600] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:03.187 01:19:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c023de1-9635-418a-9569-572e7541835a 00:14:03.187 01:19:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:14:03.187 01:19:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c023de1-9635-418a-9569-572e7541835a 00:14:03.187 01:19:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:03.187 01:19:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:03.187 01:19:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:03.187 01:19:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:03.187 01:19:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:14:03.187 01:19:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:03.187 01:19:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:03.187 01:19:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:03.187 01:19:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c023de1-9635-418a-9569-572e7541835a 00:14:03.445 request: 00:14:03.445 { 00:14:03.445 "uuid": "8c023de1-9635-418a-9569-572e7541835a", 00:14:03.445 "method": "bdev_lvol_get_lvstores", 00:14:03.445 "req_id": 1 00:14:03.445 } 00:14:03.445 Got JSON-RPC error response 00:14:03.445 response: 00:14:03.445 { 00:14:03.445 "code": -19, 00:14:03.445 "message": "No such device" 00:14:03.445 } 00:14:03.445 01:19:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:14:03.445 01:19:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:03.445 01:19:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:03.446 01:19:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:03.446 01:19:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:03.446 aio_bdev 00:14:03.704 01:19:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 17d3c59f-73a0-4e19-86c3-fd420656af80 00:14:03.704 01:19:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=17d3c59f-73a0-4e19-86c3-fd420656af80 00:14:03.704 01:19:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:03.704 01:19:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:14:03.704 01:19:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:03.704 01:19:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:03.704 01:19:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:03.704 01:19:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 17d3c59f-73a0-4e19-86c3-fd420656af80 -t 2000 00:14:03.962 [ 00:14:03.962 { 00:14:03.962 "name": "17d3c59f-73a0-4e19-86c3-fd420656af80", 00:14:03.962 "aliases": [ 00:14:03.962 "lvs/lvol" 00:14:03.963 ], 00:14:03.963 "product_name": "Logical Volume", 00:14:03.963 "block_size": 4096, 00:14:03.963 "num_blocks": 38912, 00:14:03.963 "uuid": "17d3c59f-73a0-4e19-86c3-fd420656af80", 00:14:03.963 "assigned_rate_limits": { 00:14:03.963 "rw_ios_per_sec": 0, 00:14:03.963 "rw_mbytes_per_sec": 0, 00:14:03.963 "r_mbytes_per_sec": 0, 00:14:03.963 "w_mbytes_per_sec": 0 00:14:03.963 }, 00:14:03.963 "claimed": false, 00:14:03.963 "zoned": false, 00:14:03.963 "supported_io_types": { 
00:14:03.963 "read": true, 00:14:03.963 "write": true, 00:14:03.963 "unmap": true, 00:14:03.963 "flush": false, 00:14:03.963 "reset": true, 00:14:03.963 "nvme_admin": false, 00:14:03.963 "nvme_io": false, 00:14:03.963 "nvme_io_md": false, 00:14:03.963 "write_zeroes": true, 00:14:03.963 "zcopy": false, 00:14:03.963 "get_zone_info": false, 00:14:03.963 "zone_management": false, 00:14:03.963 "zone_append": false, 00:14:03.963 "compare": false, 00:14:03.963 "compare_and_write": false, 00:14:03.963 "abort": false, 00:14:03.963 "seek_hole": true, 00:14:03.963 "seek_data": true, 00:14:03.963 "copy": false, 00:14:03.963 "nvme_iov_md": false 00:14:03.963 }, 00:14:03.963 "driver_specific": { 00:14:03.963 "lvol": { 00:14:03.963 "lvol_store_uuid": "8c023de1-9635-418a-9569-572e7541835a", 00:14:03.963 "base_bdev": "aio_bdev", 00:14:03.963 "thin_provision": false, 00:14:03.963 "num_allocated_clusters": 38, 00:14:03.963 "snapshot": false, 00:14:03.963 "clone": false, 00:14:03.963 "esnap_clone": false 00:14:03.963 } 00:14:03.963 } 00:14:03.963 } 00:14:03.963 ] 00:14:03.963 01:19:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:14:03.963 01:19:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c023de1-9635-418a-9569-572e7541835a 00:14:03.963 01:19:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:03.963 01:19:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:03.963 01:19:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c023de1-9635-418a-9569-572e7541835a 00:14:03.963 01:19:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:04.221 01:19:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:04.221 01:19:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 17d3c59f-73a0-4e19-86c3-fd420656af80 00:14:04.481 01:19:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8c023de1-9635-418a-9569-572e7541835a 00:14:04.481 01:19:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:04.768 01:19:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:04.768 00:14:04.768 real 0m17.321s 00:14:04.768 user 0m44.597s 00:14:04.768 sys 0m3.778s 00:14:04.768 01:19:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:04.768 01:19:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:04.768 ************************************ 00:14:04.768 END TEST lvs_grow_dirty 00:14:04.768 ************************************ 00:14:04.768 01:19:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:14:04.768 01:19:30 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:14:04.768 01:19:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:14:04.768 01:19:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:14:04.768 01:19:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:14:04.768 01:19:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:04.768 01:19:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:14:04.768 01:19:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:14:04.768 01:19:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:14:04.768 01:19:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:04.768 nvmf_trace.0 00:14:04.768 01:19:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:14:04.768 01:19:30 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:04.768 01:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:04.768 01:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:14:04.768 01:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:04.768 01:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:14:04.768 01:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:04.768 01:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:04.768 rmmod nvme_tcp 00:14:04.768 rmmod nvme_fabrics 00:14:04.768 rmmod nvme_keyring 00:14:05.039 01:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:05.039 01:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:14:05.039 01:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:14:05.039 01:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3341407 ']' 00:14:05.039 01:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3341407 00:14:05.039 01:19:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 3341407 ']' 00:14:05.039 01:19:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 3341407 00:14:05.039 01:19:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:14:05.039 01:19:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:05.039 01:19:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3341407 00:14:05.039 01:19:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:05.039 01:19:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:05.039 01:19:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3341407' 00:14:05.039 killing process with pid 3341407 00:14:05.039 01:19:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 3341407 00:14:05.039 01:19:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 3341407 00:14:05.039 01:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:05.039 01:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:05.039 01:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:05.039 
01:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:05.039 01:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:05.039 01:19:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.039 01:19:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:05.039 01:19:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.571 01:19:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:07.571 00:14:07.571 real 0m41.769s 00:14:07.571 user 1m5.485s 00:14:07.571 sys 0m9.386s 00:14:07.571 01:19:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:07.571 01:19:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:07.571 ************************************ 00:14:07.571 END TEST nvmf_lvs_grow 00:14:07.571 ************************************ 00:14:07.571 01:19:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:07.571 01:19:33 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:07.571 01:19:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:07.571 01:19:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:07.571 01:19:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:07.571 ************************************ 00:14:07.571 START TEST nvmf_bdev_io_wait 00:14:07.572 ************************************ 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:07.572 * Looking for test storage... 
00:14:07.572 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:14:07.572 01:19:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:12.842 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:12.842 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:12.842 Found net devices under 0000:86:00.0: cvl_0_0 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:12.842 Found net devices under 0000:86:00.1: cvl_0_1 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:12.842 01:19:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:12.842 01:19:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:12.842 01:19:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:12.842 01:19:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:12.842 01:19:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:12.842 01:19:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:12.842 01:19:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:12.842 01:19:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:12.842 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:12.842 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:14:12.842 00:14:12.842 --- 10.0.0.2 ping statistics --- 00:14:12.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.842 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:14:12.842 01:19:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:12.842 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:12.842 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:14:12.842 00:14:12.842 --- 10.0.0.1 ping statistics --- 00:14:12.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.842 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:14:12.842 01:19:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:12.842 01:19:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:14:12.842 01:19:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:12.842 01:19:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:12.842 01:19:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:12.842 01:19:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:12.842 01:19:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:12.842 01:19:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:12.842 01:19:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:12.842 01:19:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:12.842 01:19:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:12.842 01:19:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:12.842 01:19:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:12.842 01:19:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=3345448 00:14:12.842 01:19:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 3345448 00:14:12.842 01:19:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:12.842 01:19:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 3345448 ']' 00:14:12.842 01:19:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.842 01:19:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:12.843 01:19:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.843 01:19:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:12.843 01:19:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:12.843 [2024-07-16 01:19:38.266276] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
00:14:12.843 [2024-07-16 01:19:38.266316] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:12.843 EAL: No free 2048 kB hugepages reported on node 1 00:14:12.843 [2024-07-16 01:19:38.324261] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:12.843 [2024-07-16 01:19:38.396970] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:12.843 [2024-07-16 01:19:38.397011] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:12.843 [2024-07-16 01:19:38.397017] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:12.843 [2024-07-16 01:19:38.397023] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:12.843 [2024-07-16 01:19:38.397028] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:12.843 [2024-07-16 01:19:38.397081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:12.843 [2024-07-16 01:19:38.397180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:12.843 [2024-07-16 01:19:38.397244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:12.843 [2024-07-16 01:19:38.397245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.101 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:13.101 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:14:13.101 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:13.101 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:13.101 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:13.361 [2024-07-16 01:19:39.187882] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
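The trace above brings the target up in three steps: nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace (which holds cvl_0_0/10.0.0.2, while the initiator side keeps cvl_0_1/10.0.0.1) with --wait-for-rpc, bdev_set_options then shrinks the bdev_io pool before framework init, and a TCP transport is created. The tiny pool is the point of the bdev_io_wait test: with bdevperf queueing 128 IOs against a pool of 5, bdev allocations hit -ENOMEM and must take the spdk_bdev_queue_io_wait() retry path. A minimal manual equivalent, assuming the same namespace name as this run and the stock scripts/rpc.py (default socket /var/tmp/spdk.sock; wait for that socket before issuing RPCs):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # start the target inside the namespace, paused until framework_start_init
  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  # deliberately tiny bdev_io pool (-p 5) and per-thread cache (-c 1)
  $SPDK/scripts/rpc.py bdev_set_options -p 5 -c 1
  $SPDK/scripts/rpc.py framework_start_init
  $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

The trace resumes below with the malloc bdev, subsystem, and listener setup.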
00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:13.361 Malloc0 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:13.361 [2024-07-16 01:19:39.251144] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3345651 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3345654 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:13.361 { 00:14:13.361 "params": { 00:14:13.361 "name": "Nvme$subsystem", 00:14:13.361 "trtype": "$TEST_TRANSPORT", 00:14:13.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:13.361 "adrfam": "ipv4", 00:14:13.361 "trsvcid": "$NVMF_PORT", 00:14:13.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:13.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:13.361 "hdgst": ${hdgst:-false}, 00:14:13.361 "ddgst": ${ddgst:-false} 00:14:13.361 }, 00:14:13.361 "method": "bdev_nvme_attach_controller" 00:14:13.361 } 00:14:13.361 EOF 00:14:13.361 )") 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3345657 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:13.361 { 00:14:13.361 "params": { 00:14:13.361 "name": "Nvme$subsystem", 00:14:13.361 "trtype": "$TEST_TRANSPORT", 00:14:13.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:13.361 "adrfam": "ipv4", 00:14:13.361 "trsvcid": "$NVMF_PORT", 00:14:13.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:13.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:13.361 "hdgst": ${hdgst:-false}, 00:14:13.361 "ddgst": ${ddgst:-false} 00:14:13.361 }, 00:14:13.361 "method": "bdev_nvme_attach_controller" 00:14:13.361 } 00:14:13.361 EOF 00:14:13.361 )") 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3345661 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:13.361 { 00:14:13.361 "params": { 00:14:13.361 "name": "Nvme$subsystem", 00:14:13.361 "trtype": "$TEST_TRANSPORT", 00:14:13.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:13.361 "adrfam": "ipv4", 00:14:13.361 "trsvcid": "$NVMF_PORT", 00:14:13.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:13.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:13.361 "hdgst": ${hdgst:-false}, 00:14:13.361 "ddgst": ${ddgst:-false} 00:14:13.361 }, 00:14:13.361 "method": "bdev_nvme_attach_controller" 00:14:13.361 } 00:14:13.361 EOF 00:14:13.361 )") 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:13.361 01:19:39 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:13.361 { 00:14:13.361 "params": { 00:14:13.361 "name": "Nvme$subsystem", 00:14:13.361 "trtype": "$TEST_TRANSPORT", 00:14:13.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:13.361 "adrfam": "ipv4", 00:14:13.361 "trsvcid": "$NVMF_PORT", 00:14:13.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:13.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:13.361 "hdgst": ${hdgst:-false}, 00:14:13.361 "ddgst": ${ddgst:-false} 00:14:13.361 }, 00:14:13.361 "method": "bdev_nvme_attach_controller" 00:14:13.361 } 00:14:13.361 EOF 00:14:13.361 )") 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3345651 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:13.361 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:13.361 "params": { 00:14:13.361 "name": "Nvme1", 00:14:13.361 "trtype": "tcp", 00:14:13.362 "traddr": "10.0.0.2", 00:14:13.362 "adrfam": "ipv4", 00:14:13.362 "trsvcid": "4420", 00:14:13.362 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:13.362 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:13.362 "hdgst": false, 00:14:13.362 "ddgst": false 00:14:13.362 }, 00:14:13.362 "method": "bdev_nvme_attach_controller" 00:14:13.362 }' 00:14:13.362 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:14:13.362 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:13.362 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:13.362 "params": { 00:14:13.362 "name": "Nvme1", 00:14:13.362 "trtype": "tcp", 00:14:13.362 "traddr": "10.0.0.2", 00:14:13.362 "adrfam": "ipv4", 00:14:13.362 "trsvcid": "4420", 00:14:13.362 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:13.362 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:13.362 "hdgst": false, 00:14:13.362 "ddgst": false 00:14:13.362 }, 00:14:13.362 "method": "bdev_nvme_attach_controller" 00:14:13.362 }' 00:14:13.362 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:13.362 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:13.362 "params": { 00:14:13.362 "name": "Nvme1", 00:14:13.362 "trtype": "tcp", 00:14:13.362 "traddr": "10.0.0.2", 00:14:13.362 "adrfam": "ipv4", 00:14:13.362 "trsvcid": "4420", 00:14:13.362 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:13.362 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:13.362 "hdgst": false, 00:14:13.362 "ddgst": false 00:14:13.362 }, 00:14:13.362 "method": "bdev_nvme_attach_controller" 00:14:13.362 }' 00:14:13.362 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:13.362 01:19:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:13.362 "params": { 00:14:13.362 "name": "Nvme1", 00:14:13.362 "trtype": "tcp", 00:14:13.362 "traddr": "10.0.0.2", 00:14:13.362 "adrfam": "ipv4", 00:14:13.362 "trsvcid": "4420", 00:14:13.362 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:13.362 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:13.362 "hdgst": false, 00:14:13.362 "ddgst": false 00:14:13.362 }, 00:14:13.362 "method": "bdev_nvme_attach_controller" 00:14:13.362 }' 00:14:13.362 [2024-07-16 01:19:39.299407] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:14:13.362 [2024-07-16 01:19:39.299459] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:14:13.362 [2024-07-16 01:19:39.302077] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:14:13.362 [2024-07-16 01:19:39.302121] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:14:13.362 [2024-07-16 01:19:39.306086] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:14:13.362 [2024-07-16 01:19:39.306124] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:14:13.362 [2024-07-16 01:19:39.307150] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
00:14:13.362 [2024-07-16 01:19:39.307196] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:14:13.621 EAL: No free 2048 kB hugepages reported on node 1
00:14:13.621 EAL: No free 2048 kB hugepages reported on node 1 [2024-07-16 01:19:39.481978] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:13.621 EAL: No free 2048 kB hugepages reported on node 1
00:14:13.621 [2024-07-16 01:19:39.558723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:14:13.621 [2024-07-16 01:19:39.577736] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:13.621 EAL: No free 2048 kB hugepages reported on node 1
00:14:13.879 [2024-07-16 01:19:39.630775] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:13.879 [2024-07-16 01:19:39.668196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:14:13.879 [2024-07-16 01:19:39.688960] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:13.879 [2024-07-16 01:19:39.703700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:14:13.879 [2024-07-16 01:19:39.765510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:14:13.879 Running I/O for 1 seconds...
00:14:14.137 Running I/O for 1 seconds...
00:14:14.137 Running I/O for 1 seconds...
00:14:14.137 Running I/O for 1 seconds...
00:14:15.069
00:14:15.069 Latency(us)
00:14:15.069 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:15.069 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:14:15.069 Nvme1n1 : 1.00 252024.74 984.47 0.00 0.00 506.23 205.78 647.56
00:14:15.069 ===================================================================================================================
00:14:15.069 Total : 252024.74 984.47 0.00 0.00 506.23 205.78 647.56
00:14:15.069
00:14:15.069 Latency(us)
00:14:15.069 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:15.069 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:14:15.069 Nvme1n1 : 1.01 8202.99 32.04 0.00 0.00 15485.79 5898.24 24841.26
00:14:15.069 ===================================================================================================================
00:14:15.069 Total : 8202.99 32.04 0.00 0.00 15485.79 5898.24 24841.26
00:14:15.069
00:14:15.069 Latency(us)
00:14:15.069 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:15.069 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:14:15.069 Nvme1n1 : 1.01 11537.90 45.07 0.00 0.00 11054.76 6303.94 22094.99
00:14:15.069 ===================================================================================================================
00:14:15.069 Total : 11537.90 45.07 0.00 0.00 11054.76 6303.94 22094.99
00:14:15.069
00:14:15.069 Latency(us)
00:14:15.069 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:15.069 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:14:15.069 Nvme1n1 : 1.00 7648.66 29.88 0.00 0.00 16691.49 4681.14 39446.43
00:14:15.069 ===================================================================================================================
00:14:15.069 Total : 7648.66 29.88 0.00 0.00 16691.49 4681.14 39446.43
00:14:15.328 01:19:41 nvmf_tcp.nvmf_bdev_io_wait --
target/bdev_io_wait.sh@38 -- # wait 3345654 00:14:15.328 01:19:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3345657 00:14:15.328 01:19:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3345661 00:14:15.328 01:19:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:15.328 01:19:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.328 01:19:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:15.328 01:19:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.328 01:19:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:15.328 01:19:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:14:15.328 01:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:15.328 01:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:14:15.328 01:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:15.328 01:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:14:15.328 01:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:15.328 01:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:15.328 rmmod nvme_tcp 00:14:15.328 rmmod nvme_fabrics 00:14:15.328 rmmod nvme_keyring 00:14:15.328 01:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:15.328 01:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:14:15.328 01:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:14:15.328 01:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 3345448 ']' 00:14:15.328 01:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 3345448 00:14:15.328 01:19:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 3345448 ']' 00:14:15.328 01:19:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 3345448 00:14:15.328 01:19:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:14:15.328 01:19:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:15.328 01:19:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3345448 00:14:15.587 01:19:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:15.587 01:19:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:15.587 01:19:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3345448' 00:14:15.587 killing process with pid 3345448 00:14:15.587 01:19:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 3345448 00:14:15.587 01:19:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 3345448 00:14:15.587 01:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:15.587 01:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:15.587 01:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:15.587 01:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:15.587 01:19:41 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:14:15.587 01:19:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.587 01:19:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:15.587 01:19:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:18.118 01:19:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:18.118 00:14:18.118 real 0m10.467s 00:14:18.118 user 0m19.313s 00:14:18.118 sys 0m5.315s 00:14:18.118 01:19:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:18.118 01:19:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:18.118 ************************************ 00:14:18.118 END TEST nvmf_bdev_io_wait 00:14:18.118 ************************************ 00:14:18.118 01:19:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:18.118 01:19:43 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:18.118 01:19:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:18.118 01:19:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:18.118 01:19:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:18.118 ************************************ 00:14:18.118 START TEST nvmf_queue_depth 00:14:18.118 ************************************ 00:14:18.118 01:19:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:18.118 * Looking for test storage... 
00:14:18.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:14:18.119 01:19:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:23.402 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:23.402 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:14:23.402 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:23.402 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:23.402 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:23.402 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:23.402 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:23.402 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:14:23.402 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:23.402 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:14:23.402 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:14:23.402 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:14:23.402 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:14:23.402 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:23.403 
01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:23.403 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:23.403 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:23.403 Found net devices under 0000:86:00.0: cvl_0_0 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:23.403 Found net devices under 0000:86:00.1: cvl_0_1 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:23.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:23.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:14:23.403 00:14:23.403 --- 10.0.0.2 ping statistics --- 00:14:23.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.403 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:23.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:23.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:14:23.403 00:14:23.403 --- 10.0.0.1 ping statistics --- 00:14:23.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.403 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=3349448 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 3349448 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 3349448 ']' 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:23.403 01:19:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:23.403 [2024-07-16 01:19:49.031537] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
00:14:23.403 [2024-07-16 01:19:49.031583] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:23.403 EAL: No free 2048 kB hugepages reported on node 1 00:14:23.403 [2024-07-16 01:19:49.088813] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.403 [2024-07-16 01:19:49.166592] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:23.403 [2024-07-16 01:19:49.166627] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:23.403 [2024-07-16 01:19:49.166634] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:23.403 [2024-07-16 01:19:49.166640] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:23.403 [2024-07-16 01:19:49.166644] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:23.403 [2024-07-16 01:19:49.166663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:23.970 01:19:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:23.970 01:19:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:14:23.970 01:19:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:23.970 01:19:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:23.970 01:19:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:23.970 01:19:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:23.970 01:19:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:23.970 01:19:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.970 01:19:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:23.970 [2024-07-16 01:19:49.860808] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:23.970 01:19:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.970 01:19:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:23.970 01:19:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.970 01:19:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:23.970 Malloc0 00:14:23.970 01:19:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.970 01:19:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:23.970 01:19:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.970 01:19:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:23.970 01:19:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.970 01:19:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:23.970 01:19:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.970 
01:19:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:23.970 01:19:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.970 01:19:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:23.970 01:19:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.970 01:19:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:23.970 [2024-07-16 01:19:49.919291] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:23.970 01:19:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.970 01:19:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3349515 00:14:23.970 01:19:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:23.970 01:19:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:23.970 01:19:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3349515 /var/tmp/bdevperf.sock 00:14:23.970 01:19:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 3349515 ']' 00:14:23.970 01:19:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:23.970 01:19:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:23.970 01:19:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:23.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:23.970 01:19:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:23.970 01:19:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:24.229 [2024-07-16 01:19:49.969915] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
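rpc_cmd above is a thin wrapper around scripts/rpc.py against /var/tmp/spdk.sock, so the queue-depth target provisioning just traced reduces to five plain RPC calls (a sketch with shortened paths, not the verbatim script):

  rpc.py nvmf_create_transport -t tcp -o -u 8192    # TCP transport, 8 KiB I/O unit size
  rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB ramdisk with 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # bdevperf runs as a second SPDK app on its own RPC socket and drives QD 1024:
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10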
00:14:24.229 [2024-07-16 01:19:49.969954] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3349515 ] 00:14:24.229 EAL: No free 2048 kB hugepages reported on node 1 00:14:24.229 [2024-07-16 01:19:50.025678] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.229 [2024-07-16 01:19:50.108423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.796 01:19:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:24.796 01:19:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:14:24.796 01:19:50 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:24.796 01:19:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.796 01:19:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:25.055 NVMe0n1 00:14:25.055 01:19:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.055 01:19:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:25.315 Running I/O for 10 seconds... 00:14:35.285 00:14:35.285 Latency(us) 00:14:35.285 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:35.285 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:14:35.285 Verification LBA range: start 0x0 length 0x4000 00:14:35.285 NVMe0n1 : 10.05 12645.04 49.39 0.00 0.00 80707.63 10423.34 56423.38 00:14:35.285 =================================================================================================================== 00:14:35.285 Total : 12645.04 49.39 0.00 0.00 80707.63 10423.34 56423.38 00:14:35.285 0 00:14:35.285 01:20:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3349515 00:14:35.285 01:20:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 3349515 ']' 00:14:35.285 01:20:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 3349515 00:14:35.285 01:20:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:14:35.285 01:20:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:35.285 01:20:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3349515 00:14:35.285 01:20:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:35.285 01:20:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:35.285 01:20:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3349515' 00:14:35.285 killing process with pid 3349515 00:14:35.285 01:20:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 3349515 00:14:35.285 Received shutdown signal, test time was about 10.000000 seconds 00:14:35.285 00:14:35.285 Latency(us) 00:14:35.285 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:35.285 
=================================================================================================================== 00:14:35.285 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:35.285 01:20:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 3349515 00:14:35.543 01:20:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:35.543 01:20:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:14:35.543 01:20:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:35.543 01:20:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:14:35.543 01:20:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:35.543 01:20:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:14:35.543 01:20:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:35.543 01:20:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:35.543 rmmod nvme_tcp 00:14:35.543 rmmod nvme_fabrics 00:14:35.543 rmmod nvme_keyring 00:14:35.543 01:20:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:35.543 01:20:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:14:35.543 01:20:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:14:35.543 01:20:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 3349448 ']' 00:14:35.543 01:20:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 3349448 00:14:35.543 01:20:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 3349448 ']' 00:14:35.543 01:20:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 3349448 00:14:35.543 01:20:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:14:35.543 01:20:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:35.543 01:20:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3349448 00:14:35.543 01:20:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:35.543 01:20:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:35.543 01:20:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3349448' 00:14:35.543 killing process with pid 3349448 00:14:35.543 01:20:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 3349448 00:14:35.543 01:20:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 3349448 00:14:35.800 01:20:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:35.800 01:20:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:35.800 01:20:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:35.800 01:20:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:35.800 01:20:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:35.800 01:20:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.800 01:20:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:35.800 01:20:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.340 01:20:03 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:38.340 00:14:38.340 real 0m20.131s 00:14:38.340 user 0m24.956s 00:14:38.340 sys 0m5.484s 00:14:38.340 01:20:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:38.340 01:20:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:38.340 ************************************ 00:14:38.340 END TEST nvmf_queue_depth 00:14:38.341 ************************************ 00:14:38.341 01:20:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:38.341 01:20:03 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:38.341 01:20:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:38.341 01:20:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:38.341 01:20:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:38.341 ************************************ 00:14:38.341 START TEST nvmf_target_multipath 00:14:38.341 ************************************ 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:38.341 * Looking for test storage... 00:14:38.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:14:38.341 01:20:03 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:43.609 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:43.609 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:43.609 Found net devices under 0000:86:00.0: cvl_0_0 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:43.609 Found net devices under 0000:86:00.1: cvl_0_1 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:43.609 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:43.610 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:43.610 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:43.610 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:43.868 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:43.868 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:43.868 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:43.868 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:14:43.868 00:14:43.868 --- 10.0.0.2 ping statistics --- 00:14:43.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.868 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:14:43.868 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:43.868 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:43.868 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:14:43.868 00:14:43.868 --- 10.0.0.1 ping statistics --- 00:14:43.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.868 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:14:43.868 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:43.868 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:14:43.868 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:43.868 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:43.868 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:43.868 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:43.868 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:43.868 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:43.868 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:43.868 01:20:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:14:43.868 01:20:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:14:43.868 only one NIC for nvmf test 00:14:43.868 01:20:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:14:43.868 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:43.868 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:43.868 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:43.868 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:43.868 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:43.868 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:43.868 rmmod nvme_tcp 00:14:43.868 rmmod nvme_fabrics 00:14:43.868 rmmod nvme_keyring 00:14:43.868 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:43.868 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:43.868 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:43.869 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:43.869 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:43.869 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:43.869 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:43.869 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:43.869 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:43.869 01:20:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.869 01:20:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:43.869 01:20:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.828 01:20:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:14:45.828 01:20:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:14:45.828 01:20:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:14:45.828 01:20:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:45.828 01:20:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:45.828 01:20:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:45.828 01:20:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:45.828 01:20:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:45.828 01:20:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:45.828 01:20:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:45.828 01:20:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:45.828 01:20:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:45.828 01:20:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:45.828 01:20:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:45.828 01:20:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:45.828 01:20:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:45.828 01:20:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:45.828 01:20:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:45.828 01:20:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.828 01:20:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:45.828 01:20:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.828 01:20:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:45.828 00:14:45.828 real 0m7.955s 00:14:45.828 user 0m1.615s 00:14:45.828 sys 0m4.309s 00:14:46.086 01:20:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:46.086 01:20:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:46.086 ************************************ 00:14:46.086 END TEST nvmf_target_multipath 00:14:46.086 ************************************ 00:14:46.086 01:20:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:46.086 01:20:11 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:46.086 01:20:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:46.086 01:20:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:46.086 01:20:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:46.086 ************************************ 00:14:46.086 START TEST nvmf_zcopy 00:14:46.086 ************************************ 00:14:46.086 01:20:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:46.086 * Looking for test storage... 
00:14:46.086 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:46.086 01:20:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:46.086 01:20:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:14:46.086 01:20:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:46.086 01:20:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:46.086 01:20:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:46.086 01:20:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:46.086 01:20:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:46.086 01:20:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:46.086 01:20:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:46.086 01:20:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:46.086 01:20:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:46.086 01:20:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:46.087 01:20:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:46.087 01:20:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:46.087 01:20:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:46.087 01:20:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:46.087 01:20:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:46.087 01:20:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:46.087 01:20:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:46.087 01:20:11 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:46.087 01:20:11 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:46.087 01:20:11 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:46.087 01:20:11 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.087 01:20:11 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:14:46.087 01:20:11 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.087 01:20:11 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:14:46.087 01:20:11 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.087 01:20:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:14:46.087 01:20:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:46.087 01:20:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:46.087 01:20:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:46.087 01:20:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:46.087 01:20:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:46.087 01:20:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:46.087 01:20:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:46.087 01:20:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:46.087 01:20:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:14:46.087 01:20:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:46.087 01:20:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:46.087 01:20:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:46.087 01:20:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:46.087 01:20:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:46.087 01:20:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:46.087 01:20:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:46.087 01:20:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:46.087 01:20:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:46.087 01:20:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:46.087 01:20:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:14:46.087 01:20:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:51.376 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:51.376 
01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:51.376 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:51.376 Found net devices under 0000:86:00.0: cvl_0_0 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:51.376 Found net devices under 0000:86:00.1: cvl_0_1 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:51.376 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:51.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:51.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:14:51.377 00:14:51.377 --- 10.0.0.2 ping statistics --- 00:14:51.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.377 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:51.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:51.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:14:51.377 00:14:51.377 --- 10.0.0.1 ping statistics --- 00:14:51.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.377 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3358373 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3358373 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 3358373 ']' 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:51.377 01:20:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:51.377 [2024-07-16 01:20:17.360521] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:14:51.377 [2024-07-16 01:20:17.360564] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:51.635 EAL: No free 2048 kB hugepages reported on node 1 00:14:51.635 [2024-07-16 01:20:17.419540] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.635 [2024-07-16 01:20:17.496947] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:51.635 [2024-07-16 01:20:17.496982] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:51.377 [2024-07-16 01:20:17.360521] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization...
00:14:51.377 [2024-07-16 01:20:17.360564] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:51.635 EAL: No free 2048 kB hugepages reported on node 1
00:14:51.635 [2024-07-16 01:20:17.419540] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:51.635 [2024-07-16 01:20:17.496947] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:14:51.635 [2024-07-16 01:20:17.496982] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:14:51.635 [2024-07-16 01:20:17.496989] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:14:51.635 [2024-07-16 01:20:17.496998] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running.
00:14:51.635 [2024-07-16 01:20:17.497003] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:14:51.635 [2024-07-16 01:20:17.497020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:14:52.201 01:20:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:14:52.201 01:20:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0
00:14:52.201 01:20:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:14:52.201 01:20:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable
00:14:52.202 01:20:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:14:52.202 01:20:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:14:52.460 01:20:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:14:52.460 01:20:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:14:52.460 01:20:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:52.460 01:20:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:14:52.460 [2024-07-16 01:20:18.194969] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:14:52.460 01:20:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:52.460 01:20:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:14:52.460 01:20:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:52.460 01:20:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:14:52.460 01:20:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:52.460 01:20:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:14:52.460 01:20:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:52.460 01:20:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:14:52.460 [2024-07-16 01:20:18.211090] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:52.460 01:20:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:52.460 01:20:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:14:52.460 01:20:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:52.460 01:20:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:14:52.460 01:20:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:52.460 01:20:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:14:52.460 01:20:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:52.460 01:20:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:14:52.460 malloc0
00:14:52.460 01:20:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
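rpc_cmd is a thin wrapper around scripts/rpc.py pointed at the target's socket, so the provisioning sequence traced above is equivalent to the following; the socket path shown is the default one, and the flag glosses are the usual readings of these options:

    rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"
    # TCP transport with zero-copy enabled (--zcopy is what this test
    # exercises; -c 0 presumably forces in-capsule data off so I/O takes
    # the zero-copy path).
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
    # Subsystem allowing any host (-a), serial SPDK00000000000001,
    # at most 10 namespaces (-m 10).
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # 32 MB ramdisk bdev with 4096-byte blocks to back the namespace.
    $rpc bdev_malloc_create 32 4096 -b malloc0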
00:14:52.460 01:20:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:14:52.460 01:20:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:52.460 01:20:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:14:52.460 01:20:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:52.460 01:20:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:14:52.460 01:20:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:14:52.460 01:20:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:14:52.460 01:20:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:14:52.460 01:20:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:14:52.460 01:20:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:14:52.460 {
00:14:52.460 "params": {
00:14:52.460 "name": "Nvme$subsystem",
00:14:52.460 "trtype": "$TEST_TRANSPORT",
00:14:52.460 "traddr": "$NVMF_FIRST_TARGET_IP",
00:14:52.460 "adrfam": "ipv4",
00:14:52.460 "trsvcid": "$NVMF_PORT",
00:14:52.460 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:14:52.460 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:14:52.460 "hdgst": ${hdgst:-false},
00:14:52.460 "ddgst": ${ddgst:-false}
00:14:52.460 },
00:14:52.460 "method": "bdev_nvme_attach_controller"
00:14:52.460 }
00:14:52.460 EOF
00:14:52.460 )")
00:14:52.460 01:20:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:14:52.460 01:20:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:14:52.460 01:20:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:14:52.460 01:20:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:14:52.460 "params": {
00:14:52.460 "name": "Nvme1",
00:14:52.460 "trtype": "tcp",
00:14:52.460 "traddr": "10.0.0.2",
00:14:52.460 "adrfam": "ipv4",
00:14:52.460 "trsvcid": "4420",
00:14:52.460 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:14:52.460 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:14:52.460 "hdgst": false,
00:14:52.460 "ddgst": false
00:14:52.460 },
00:14:52.460 "method": "bdev_nvme_attach_controller"
00:14:52.460 }'
00:14:52.460 [2024-07-16 01:20:18.291270] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization...
00:14:52.460 [2024-07-16 01:20:18.291312] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3358521 ]
00:14:52.460 EAL: No free 2048 kB hugepages reported on node 1
00:14:52.460 [2024-07-16 01:20:18.345471] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:52.460 [2024-07-16 01:20:18.417410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:14:52.719 Running I/O for 10 seconds...
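bdevperf never talks to the RPC socket here; it takes a complete JSON config via --json, and the test hands it one over an anonymous fd (/dev/fd/62) through process substitution. Only the inner bdev_nvme_attach_controller object is visible in the trace, so the "subsystems" wrapper in this sketch is an assumption about gen_nvmf_target_json's full output, not a quote of it:

    config='{
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "params": {
            "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false, "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }]
      }]
    }'
    # <(...) is why the trace shows --json /dev/fd/62: the config arrives
    # over an anonymous pipe and never touches disk.
    ./build/examples/bdevperf --json <(printf '%s' "$config") -t 10 -q 128 -w verify -o 8192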
00:15:04.919 
00:15:04.919 Latency(us)
00:15:04.919 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:04.919 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:15:04.919 Verification LBA range: start 0x0 length 0x1000
00:15:04.919 Nvme1n1 : 10.01 8923.02 69.71 0.00 0.00 14303.52 526.63 24466.77
00:15:04.919 ===================================================================================================================
00:15:04.919 Total : 8923.02 69.71 0.00 0.00 14303.52 526.63 24466.77
00:15:04.919 01:20:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3360230
00:15:04.919 01:20:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:15:04.919 01:20:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:15:04.919 01:20:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:15:04.919 01:20:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:15:04.919 01:20:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:15:04.919 01:20:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:15:04.919 01:20:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:15:04.919 01:20:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:15:04.919 {
00:15:04.919 "params": {
00:15:04.919 "name": "Nvme$subsystem",
00:15:04.919 "trtype": "$TEST_TRANSPORT",
00:15:04.919 "traddr": "$NVMF_FIRST_TARGET_IP",
00:15:04.919 "adrfam": "ipv4",
00:15:04.919 "trsvcid": "$NVMF_PORT",
00:15:04.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:15:04.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:15:04.919 "hdgst": ${hdgst:-false},
00:15:04.919 "ddgst": ${ddgst:-false}
00:15:04.919 },
00:15:04.919 "method": "bdev_nvme_attach_controller"
00:15:04.919 }
00:15:04.919 EOF
00:15:04.919 )")
00:15:04.919 01:20:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:15:04.919 [2024-07-16 01:20:28.908289] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:04.919 [2024-07-16 01:20:28.908318] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
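The error pair that first appears here, and dominates the rest of this excerpt, is expected: malloc0 already occupies NSID 1 (added at zcopy.sh@30), and while the 5-second randrw bdevperf job runs, the test keeps calling nvmf_subsystem_add_ns with that same NSID. Every attempt pauses the subsystem, fails, and resumes it, which is exactly the pause/resume-under-active-I/O path the zcopy test wants to stress. A plausible reconstruction of the driving loop, inferred from the trace rather than copied from test/nvmf/target/zcopy.sh:

    # Hammer the target with duplicate-NSID adds for as long as bdevperf
    # (PID in $perfpid) is alive. Every call is expected to fail.
    while kill -0 "$perfpid" 2> /dev/null; do
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done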
00:15:04.919 01:20:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:15:04.919 01:20:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:15:04.919 01:20:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:15:04.919 "params": {
00:15:04.919 "name": "Nvme1",
00:15:04.919 "trtype": "tcp",
00:15:04.919 "traddr": "10.0.0.2",
00:15:04.919 "adrfam": "ipv4",
00:15:04.919 "trsvcid": "4420",
00:15:04.919 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:15:04.919 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:15:04.919 "hdgst": false,
00:15:04.919 "ddgst": false
00:15:04.919 },
00:15:04.919 "method": "bdev_nvme_attach_controller"
00:15:04.919 }'
00:15:04.919 [2024-07-16 01:20:28.916283] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:04.919 [2024-07-16 01:20:28.916296] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats at 01:20:28.924, .932 and .940; repetitions elided ...]
00:15:04.919 [2024-07-16 01:20:28.944571] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization...
00:15:04.919 [2024-07-16 01:20:28.944609] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3360230 ]
[... error pairs at 01:20:28.948, .956 and .964 elided ...]
00:15:04.919 EAL: No free 2048 kB hugepages reported on node 1
[... error pairs at 01:20:28.972, .980, .988 and .996 elided ...]
00:15:04.919 [2024-07-16 01:20:29.000265] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
[... error pairs from 01:20:29.004 through 01:20:29.068 elided ...]
00:15:04.919 [2024-07-16 01:20:29.075380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
[... error pairs from 01:20:29.076 through 01:20:29.221 elided ...]
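If the failure ever needed confirming by hand, listing the subsystem would show NSID 1 already bound to malloc0; the jq filter here is illustrative and the expected output is abbreviated:

    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_subsystems \
        | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | .namespaces'
    # -> [ { "nsid": 1, "bdev_name": "malloc0", ... } ]  (roughly)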
[... error pairs at 01:20:29.229, .237, .245, .253, .300 and .305 elided ...]
00:15:04.920 Running I/O for 5 seconds...
[... from here to the end of this excerpt the log is the paired messages "subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use" and "nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace", repeating roughly every 9 ms from 01:20:29.313 through 01:20:30.955 while the 5-second bdevperf job runs; the repetitions are elided here ...]
00:15:05.182 [2024-07-16 01:20:30.964413]
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.182 [2024-07-16 01:20:30.964431] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.182 [2024-07-16 01:20:30.973514] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.182 [2024-07-16 01:20:30.973532] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.182 [2024-07-16 01:20:30.983292] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.182 [2024-07-16 01:20:30.983310] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.182 [2024-07-16 01:20:30.992022] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.182 [2024-07-16 01:20:30.992040] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.182 [2024-07-16 01:20:31.001104] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.182 [2024-07-16 01:20:31.001122] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.182 [2024-07-16 01:20:31.010252] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.182 [2024-07-16 01:20:31.010270] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.182 [2024-07-16 01:20:31.019437] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.182 [2024-07-16 01:20:31.019456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.182 [2024-07-16 01:20:31.028858] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.182 [2024-07-16 01:20:31.028877] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.182 [2024-07-16 01:20:31.038389] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.182 [2024-07-16 01:20:31.038407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.183 [2024-07-16 01:20:31.046809] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.183 [2024-07-16 01:20:31.046827] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.183 [2024-07-16 01:20:31.055912] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.183 [2024-07-16 01:20:31.055929] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.183 [2024-07-16 01:20:31.064311] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.183 [2024-07-16 01:20:31.064327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.183 [2024-07-16 01:20:31.072772] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.183 [2024-07-16 01:20:31.072789] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.183 [2024-07-16 01:20:31.081698] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.183 [2024-07-16 01:20:31.081715] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.183 [2024-07-16 01:20:31.090781] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.183 [2024-07-16 01:20:31.090799] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.183 [2024-07-16 01:20:31.099500] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.183 [2024-07-16 01:20:31.099517] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.183 [2024-07-16 01:20:31.108557] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.183 [2024-07-16 01:20:31.108574] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.183 [2024-07-16 01:20:31.117709] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.183 [2024-07-16 01:20:31.117727] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.183 [2024-07-16 01:20:31.126945] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.183 [2024-07-16 01:20:31.126962] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.183 [2024-07-16 01:20:31.136634] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.183 [2024-07-16 01:20:31.136650] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.183 [2024-07-16 01:20:31.146040] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.183 [2024-07-16 01:20:31.146057] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.183 [2024-07-16 01:20:31.154489] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.183 [2024-07-16 01:20:31.154506] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.183 [2024-07-16 01:20:31.163400] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.183 [2024-07-16 01:20:31.163417] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.441 [2024-07-16 01:20:31.172260] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.441 [2024-07-16 01:20:31.172277] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.441 [2024-07-16 01:20:31.181035] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.441 [2024-07-16 01:20:31.181052] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.441 [2024-07-16 01:20:31.189599] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.441 [2024-07-16 01:20:31.189617] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.441 [2024-07-16 01:20:31.199177] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.441 [2024-07-16 01:20:31.199195] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.441 [2024-07-16 01:20:31.208464] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.441 [2024-07-16 01:20:31.208482] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.441 [2024-07-16 01:20:31.216996] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.441 [2024-07-16 01:20:31.217012] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.441 [2024-07-16 01:20:31.225553] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.441 [2024-07-16 01:20:31.225570] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.441 [2024-07-16 01:20:31.234682] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.441 [2024-07-16 01:20:31.234699] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.441 [2024-07-16 01:20:31.243822] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.441 [2024-07-16 01:20:31.243839] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.441 [2024-07-16 01:20:31.253154] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.441 [2024-07-16 01:20:31.253171] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.441 [2024-07-16 01:20:31.262623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.441 [2024-07-16 01:20:31.262640] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.441 [2024-07-16 01:20:31.271446] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.441 [2024-07-16 01:20:31.271464] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.441 [2024-07-16 01:20:31.278256] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.441 [2024-07-16 01:20:31.278272] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.441 [2024-07-16 01:20:31.289013] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.441 [2024-07-16 01:20:31.289030] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.441 [2024-07-16 01:20:31.298143] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.441 [2024-07-16 01:20:31.298160] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.441 [2024-07-16 01:20:31.307122] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.441 [2024-07-16 01:20:31.307138] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.441 [2024-07-16 01:20:31.316229] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.441 [2024-07-16 01:20:31.316246] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.441 [2024-07-16 01:20:31.325273] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.441 [2024-07-16 01:20:31.325291] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.441 [2024-07-16 01:20:31.333928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.441 [2024-07-16 01:20:31.333946] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.441 [2024-07-16 01:20:31.343073] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.441 [2024-07-16 01:20:31.343090] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.441 [2024-07-16 01:20:31.352260] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.441 [2024-07-16 01:20:31.352277] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.441 [2024-07-16 01:20:31.361362] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.441 [2024-07-16 01:20:31.361379] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.441 [2024-07-16 01:20:31.369644] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.441 [2024-07-16 01:20:31.369661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.441 [2024-07-16 01:20:31.378558] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.441 [2024-07-16 01:20:31.378575] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.441 [2024-07-16 01:20:31.387523] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.441 [2024-07-16 01:20:31.387540] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.441 [2024-07-16 01:20:31.397134] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.441 [2024-07-16 01:20:31.397151] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.441 [2024-07-16 01:20:31.406825] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.441 [2024-07-16 01:20:31.406842] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.441 [2024-07-16 01:20:31.415778] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.441 [2024-07-16 01:20:31.415796] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.441 [2024-07-16 01:20:31.424329] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.441 [2024-07-16 01:20:31.424352] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.699 [2024-07-16 01:20:31.433542] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.699 [2024-07-16 01:20:31.433559] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.699 [2024-07-16 01:20:31.443285] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.699 [2024-07-16 01:20:31.443303] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.699 [2024-07-16 01:20:31.451914] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.699 [2024-07-16 01:20:31.451931] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.699 [2024-07-16 01:20:31.460511] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.699 [2024-07-16 01:20:31.460529] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.699 [2024-07-16 01:20:31.469662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.699 [2024-07-16 01:20:31.469681] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.699 [2024-07-16 01:20:31.478656] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.699 [2024-07-16 01:20:31.478674] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.699 [2024-07-16 01:20:31.487519] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.699 [2024-07-16 01:20:31.487536] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.699 [2024-07-16 01:20:31.496619] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.699 [2024-07-16 01:20:31.496637] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.699 [2024-07-16 01:20:31.505815] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.699 [2024-07-16 01:20:31.505833] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.699 [2024-07-16 01:20:31.514175] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.699 [2024-07-16 01:20:31.514192] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.699 [2024-07-16 01:20:31.522553] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.699 [2024-07-16 01:20:31.522569] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.699 [2024-07-16 01:20:31.531113] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.699 [2024-07-16 01:20:31.531130] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.699 [2024-07-16 01:20:31.540480] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.699 [2024-07-16 01:20:31.540497] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.699 [2024-07-16 01:20:31.548909] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.699 [2024-07-16 01:20:31.548927] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.700 [2024-07-16 01:20:31.558037] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.700 [2024-07-16 01:20:31.558054] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.700 [2024-07-16 01:20:31.567190] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.700 [2024-07-16 01:20:31.567207] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.700 [2024-07-16 01:20:31.575576] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.700 [2024-07-16 01:20:31.575592] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.700 [2024-07-16 01:20:31.584754] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.700 [2024-07-16 01:20:31.584772] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.700 [2024-07-16 01:20:31.593657] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.700 [2024-07-16 01:20:31.593674] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.700 [2024-07-16 01:20:31.603325] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.700 [2024-07-16 01:20:31.603353] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.700 [2024-07-16 01:20:31.612021] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.700 [2024-07-16 01:20:31.612038] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.700 [2024-07-16 01:20:31.621130] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.700 [2024-07-16 01:20:31.621148] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.700 [2024-07-16 01:20:31.630204] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.700 [2024-07-16 01:20:31.630222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.700 [2024-07-16 01:20:31.638821] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.700 [2024-07-16 01:20:31.638838] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.700 [2024-07-16 01:20:31.647833] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.700 [2024-07-16 01:20:31.647850] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.700 [2024-07-16 01:20:31.656977] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.700 [2024-07-16 01:20:31.656994] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.700 [2024-07-16 01:20:31.666260] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.700 [2024-07-16 01:20:31.666277] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.700 [2024-07-16 01:20:31.674939] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.700 [2024-07-16 01:20:31.674956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.700 [2024-07-16 01:20:31.683770] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.700 [2024-07-16 01:20:31.683786] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.958 [2024-07-16 01:20:31.692839] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.958 [2024-07-16 01:20:31.692857] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.958 [2024-07-16 01:20:31.702151] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.958 [2024-07-16 01:20:31.702169] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.958 [2024-07-16 01:20:31.710780] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.958 [2024-07-16 01:20:31.710797] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.958 [2024-07-16 01:20:31.719789] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.958 [2024-07-16 01:20:31.719806] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.958 [2024-07-16 01:20:31.729038] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.958 [2024-07-16 01:20:31.729055] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.958 [2024-07-16 01:20:31.738173] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.958 [2024-07-16 01:20:31.738191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.958 [2024-07-16 01:20:31.747125] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.958 [2024-07-16 01:20:31.747142] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.958 [2024-07-16 01:20:31.756038] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.958 [2024-07-16 01:20:31.756055] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.958 [2024-07-16 01:20:31.765063] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.958 [2024-07-16 01:20:31.765079] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.958 [2024-07-16 01:20:31.774238] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.958 [2024-07-16 01:20:31.774258] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.958 [2024-07-16 01:20:31.782603] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.958 [2024-07-16 01:20:31.782620] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.958 [2024-07-16 01:20:31.791636] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.958 [2024-07-16 01:20:31.791653] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.958 [2024-07-16 01:20:31.800699] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.958 [2024-07-16 01:20:31.800716] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.958 [2024-07-16 01:20:31.809735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.958 [2024-07-16 01:20:31.809753] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.958 [2024-07-16 01:20:31.818221] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.958 [2024-07-16 01:20:31.818239] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.958 [2024-07-16 01:20:31.827334] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.958 [2024-07-16 01:20:31.827356] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.958 [2024-07-16 01:20:31.836516] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.958 [2024-07-16 01:20:31.836534] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.958 [2024-07-16 01:20:31.845588] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.958 [2024-07-16 01:20:31.845605] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.958 [2024-07-16 01:20:31.854487] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.958 [2024-07-16 01:20:31.854505] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.958 [2024-07-16 01:20:31.863575] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.958 [2024-07-16 01:20:31.863603] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.958 [2024-07-16 01:20:31.872521] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.958 [2024-07-16 01:20:31.872538] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.958 [2024-07-16 01:20:31.881729] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.958 [2024-07-16 01:20:31.881745] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.958 [2024-07-16 01:20:31.890978] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.958 [2024-07-16 01:20:31.890995] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.958 [2024-07-16 01:20:31.900011] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.958 [2024-07-16 01:20:31.900029] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.958 [2024-07-16 01:20:31.909028] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.958 [2024-07-16 01:20:31.909045] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.958 [2024-07-16 01:20:31.917981] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.958 [2024-07-16 01:20:31.917998] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.958 [2024-07-16 01:20:31.926990] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.958 [2024-07-16 01:20:31.927007] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.958 [2024-07-16 01:20:31.936164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:05.958 [2024-07-16 01:20:31.936181] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.217 [2024-07-16 01:20:31.945430] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.217 [2024-07-16 01:20:31.945453] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.217 [2024-07-16 01:20:31.954594] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.217 [2024-07-16 01:20:31.954611] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.217 [2024-07-16 01:20:31.963705] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.217 [2024-07-16 01:20:31.963721] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.217 [2024-07-16 01:20:31.972759] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.217 [2024-07-16 01:20:31.972776] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.217 [2024-07-16 01:20:31.982379] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.217 [2024-07-16 01:20:31.982396] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.217 [2024-07-16 01:20:31.991097] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.217 [2024-07-16 01:20:31.991114] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.217 [2024-07-16 01:20:31.999660] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.217 [2024-07-16 01:20:31.999677] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.217 [2024-07-16 01:20:32.009172] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.217 [2024-07-16 01:20:32.009190] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.217 [2024-07-16 01:20:32.017569] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.217 [2024-07-16 01:20:32.017586] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.217 [2024-07-16 01:20:32.026702] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.217 [2024-07-16 01:20:32.026719] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.217 [2024-07-16 01:20:32.036558] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.217 [2024-07-16 01:20:32.036577] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.217 [2024-07-16 01:20:32.046393] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.217 [2024-07-16 01:20:32.046411] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.217 [2024-07-16 01:20:32.054908] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.217 [2024-07-16 01:20:32.054926] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.217 [2024-07-16 01:20:32.064018] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.217 [2024-07-16 01:20:32.064037] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.217 [2024-07-16 01:20:32.073107] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.217 [2024-07-16 01:20:32.073126] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.217 [2024-07-16 01:20:32.082279] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.217 [2024-07-16 01:20:32.082298] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.217 [2024-07-16 01:20:32.091943] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.217 [2024-07-16 01:20:32.091961] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.217 [2024-07-16 01:20:32.100594] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.217 [2024-07-16 01:20:32.100611] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.217 [2024-07-16 01:20:32.109881] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.217 [2024-07-16 01:20:32.109899] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.217 [2024-07-16 01:20:32.119479] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.217 [2024-07-16 01:20:32.119501] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.217 [2024-07-16 01:20:32.128030] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.217 [2024-07-16 01:20:32.128048] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.217 [2024-07-16 01:20:32.137133] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.217 [2024-07-16 01:20:32.137151] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.217 [2024-07-16 01:20:32.146456] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.217 [2024-07-16 01:20:32.146474] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.217 [2024-07-16 01:20:32.155249] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.217 [2024-07-16 01:20:32.155267] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.217 [2024-07-16 01:20:32.163816] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.217 [2024-07-16 01:20:32.163834] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.217 [2024-07-16 01:20:32.172776] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.217 [2024-07-16 01:20:32.172794] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.217 [2024-07-16 01:20:32.182166] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.217 [2024-07-16 01:20:32.182184] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.217 [2024-07-16 01:20:32.190682] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.217 [2024-07-16 01:20:32.190700] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.217 [2024-07-16 01:20:32.199790] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.217 [2024-07-16 01:20:32.199808] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.476 [2024-07-16 01:20:32.208743] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.476 [2024-07-16 01:20:32.208760] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.476 [2024-07-16 01:20:32.217930] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.476 [2024-07-16 01:20:32.217948] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.476 [2024-07-16 01:20:32.226716] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.476 [2024-07-16 01:20:32.226734] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.476 [2024-07-16 01:20:32.235846] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.476 [2024-07-16 01:20:32.235864] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.476 [2024-07-16 01:20:32.245065] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.476 [2024-07-16 01:20:32.245083] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.476 [2024-07-16 01:20:32.254077] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.476 [2024-07-16 01:20:32.254095] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.476 [2024-07-16 01:20:32.263031] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.477 [2024-07-16 01:20:32.263049] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.477 [2024-07-16 01:20:32.272060] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.477 [2024-07-16 01:20:32.272078] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.477 [2024-07-16 01:20:32.281146] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.477 [2024-07-16 01:20:32.281164] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.477 [2024-07-16 01:20:32.290187] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.477 [2024-07-16 01:20:32.290204] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.477 [2024-07-16 01:20:32.299662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.477 [2024-07-16 01:20:32.299680] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.477 [2024-07-16 01:20:32.308112] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.477 [2024-07-16 01:20:32.308130] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.477 [2024-07-16 01:20:32.317189] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.477 [2024-07-16 01:20:32.317206] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.477 [2024-07-16 01:20:32.325688] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.477 [2024-07-16 01:20:32.325705] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.477 [2024-07-16 01:20:32.334710] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.477 [2024-07-16 01:20:32.334727] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.477 [2024-07-16 01:20:32.343952] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.477 [2024-07-16 01:20:32.343970] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.477 [2024-07-16 01:20:32.352458] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.477 [2024-07-16 01:20:32.352476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.477 [2024-07-16 01:20:32.361731] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.477 [2024-07-16 01:20:32.361749] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.477 [2024-07-16 01:20:32.370721] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.477 [2024-07-16 01:20:32.370739] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.477 [2024-07-16 01:20:32.379855] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.477 [2024-07-16 01:20:32.379873] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.477 [2024-07-16 01:20:32.388799] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.477 [2024-07-16 01:20:32.388817] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.477 [2024-07-16 01:20:32.397879] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.477 [2024-07-16 01:20:32.397897] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.477 [2024-07-16 01:20:32.406789] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.477 [2024-07-16 01:20:32.406807] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.477 [2024-07-16 01:20:32.415940] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.477 [2024-07-16 01:20:32.415957] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.477 [2024-07-16 01:20:32.425209] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.477 [2024-07-16 01:20:32.425226] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.477 [2024-07-16 01:20:32.434148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.477 [2024-07-16 01:20:32.434166] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.477 [2024-07-16 01:20:32.442653] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.477 [2024-07-16 01:20:32.442670] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.477 [2024-07-16 01:20:32.451828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.477 [2024-07-16 01:20:32.451846] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.477 [2024-07-16 01:20:32.460351] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.477 [2024-07-16 01:20:32.460369] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.735 [2024-07-16 01:20:32.469457] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.735 [2024-07-16 01:20:32.469474] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.735 [2024-07-16 01:20:32.478587] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.735 [2024-07-16 01:20:32.478604] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.735 [2024-07-16 01:20:32.487720] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.735 [2024-07-16 01:20:32.487738] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.735 [2024-07-16 01:20:32.496864] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.735 [2024-07-16 01:20:32.496881] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.735 [2024-07-16 01:20:32.505836] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.736 [2024-07-16 01:20:32.505853] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.736 [2024-07-16 01:20:32.514982] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.736 [2024-07-16 01:20:32.515000] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.736 [2024-07-16 01:20:32.524047] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.736 [2024-07-16 01:20:32.524064] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.736 [2024-07-16 01:20:32.533123] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.736 [2024-07-16 01:20:32.533140] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.736 [2024-07-16 01:20:32.541553] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.736 [2024-07-16 01:20:32.541570] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.736 [2024-07-16 01:20:32.550653] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.736 [2024-07-16 01:20:32.550671] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.736 [2024-07-16 01:20:32.559154] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.736 [2024-07-16 01:20:32.559171] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.736 [2024-07-16 01:20:32.568226] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.736 [2024-07-16 01:20:32.568243] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.736 [2024-07-16 01:20:32.577093] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.736 [2024-07-16 01:20:32.577109] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.736 [2024-07-16 01:20:32.583815] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.736 [2024-07-16 01:20:32.583831] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.736 [2024-07-16 01:20:32.594586] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.736 [2024-07-16 01:20:32.594604] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.736 [2024-07-16 01:20:32.603555] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.736 [2024-07-16 01:20:32.603572] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.736 [2024-07-16 01:20:32.611987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.736 [2024-07-16 01:20:32.612004] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.736 [2024-07-16 01:20:32.620508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.736 [2024-07-16 01:20:32.620525] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.736 [2024-07-16 01:20:32.629650] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.736 [2024-07-16 01:20:32.629667] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.736 [2024-07-16 01:20:32.638548] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.736 [2024-07-16 01:20:32.638565] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.736 [2024-07-16 01:20:32.647690] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.736 [2024-07-16 01:20:32.647706] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.736 [2024-07-16 01:20:32.656726] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:06.736 [2024-07-16 01:20:32.656744] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:06.736 [2024-07-16 01:20:32.665755] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:06.736 [2024-07-16 01:20:32.665772] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... this same two-line pair (spdk_nvmf_subsystem_add_ns_ext rejecting NSID 1 as already in use, followed by nvmf_rpc_ns_paused reporting the failed RPC) repeats roughly every 9 ms from 01:20:32.674 through 01:20:34.331; several hundred identical occurrences elided ...]
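The elided failures are the expected result of retrying an add-namespace RPC while NSID 1 is still attached. As a minimal stand-alone sketch, assuming a target already serving nqn.2016-06.io.spdk:cnode1 with a bdev named malloc0 on NSID 1 (the loop count here is illustrative, not the test's own):

# Each attempt fails with "Requested NSID 1 already in use" and emits the
# same two-line error pair seen in this console output.
for i in $(seq 1 20); do
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done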
00:15:08.548 Latency(us)
00:15:08.548 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:08.548 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:15:08.548 Nvme1n1 : 5.01 17044.85 133.16 0.00 0.00 7503.18 3120.76 18100.42
00:15:08.548 ===================================================================================================================
00:15:08.548 Total : 17044.85 133.16 0.00 0.00 7503.18 3120.76 18100.42
[... once the 5-second I/O summary is printed, the same NSID-conflict error pair resumes at roughly 8 ms intervals from 01:20:34.339 through 01:20:34.499; identical occurrences elided ...]
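As a quick sanity check on the summary table above: at the stated 8192-byte I/O size the IOPS and MiB/s columns agree, which a one-liner confirms (values copied from the Total row):

# 17044.85 IOPS at 8 KiB per I/O comes to 133.16 MiB/s, matching the table.
awk 'BEGIN { printf "%.2f\n", 17044.85 * 8192 / (1024 * 1024) }'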
00:15:08.548 [2024-07-16 01:20:34.507544] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:08.548 [2024-07-16 01:20:34.507553] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:08.548 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3360230) - No such process
00:15:08.548 01:20:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3360230
00:15:08.548 01:20:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:15:08.548 01:20:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:08.548 01:20:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:15:08.548 01:20:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:08.548 01:20:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:15:08.548 01:20:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:08.548 01:20:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:15:08.548 delay0
00:15:08.548 01:20:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:08.548 01:20:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:15:08.548 01:20:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:08.548 01:20:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:15:08.806 01:20:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:08.806 01:20:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:15:08.806 EAL: No free 2048 kB hugepages reported on node 1
00:15:08.806 [2024-07-16 01:20:34.634244] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:15:15.361 Initializing NVMe Controllers
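The controller banner above belongs to the abort run, whose output continues below. For reference, the sequence just traced (swap NSID 1 from malloc0 to a delay bdev, then drive abortable I/O at it) can be replayed by hand against a running target; paths, names, and flags are taken verbatim from this log, and the four 1000000 values are the delay bdev's average/p99 read and write latencies in microseconds:

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
# With ~1 s of injected latency, queued I/O stays in flight long enough to abort:
build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'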
00:15:15.361 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:15.361 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:15.361 Initialization complete. Launching workers. 00:15:15.361 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 1282 00:15:15.361 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1555, failed to submit 47 00:15:15.361 success 1374, unsuccess 181, failed 0 00:15:15.361 01:20:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:15:15.361 01:20:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:15:15.361 01:20:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:15.361 01:20:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:15:15.361 01:20:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:15.361 01:20:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:15:15.361 01:20:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:15.361 01:20:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:15.361 rmmod nvme_tcp 00:15:15.361 rmmod nvme_fabrics 00:15:15.361 rmmod nvme_keyring 00:15:15.361 01:20:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:15.361 01:20:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:15:15.361 01:20:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:15:15.361 01:20:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3358373 ']' 00:15:15.361 01:20:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3358373 00:15:15.361 01:20:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 3358373 ']' 00:15:15.361 01:20:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 3358373 00:15:15.361 01:20:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:15:15.361 01:20:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:15.361 01:20:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3358373 00:15:15.361 01:20:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:15.361 01:20:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:15.361 01:20:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3358373' 00:15:15.361 killing process with pid 3358373 00:15:15.361 01:20:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 3358373 00:15:15.361 01:20:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 3358373 00:15:15.361 01:20:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:15.361 01:20:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:15.361 01:20:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:15.361 01:20:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:15.361 01:20:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:15.361 01:20:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:15.361 01:20:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:15.361 01:20:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:15:17.933 01:20:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:17.933 00:15:17.933 real 0m31.449s 00:15:17.933 user 0m43.338s 00:15:17.933 sys 0m10.484s 00:15:17.933 01:20:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:17.933 01:20:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:17.933 ************************************ 00:15:17.933 END TEST nvmf_zcopy 00:15:17.933 ************************************ 00:15:17.933 01:20:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:17.933 01:20:43 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:17.933 01:20:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:17.933 01:20:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:17.933 01:20:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:17.933 ************************************ 00:15:17.933 START TEST nvmf_nmic 00:15:17.933 ************************************ 00:15:17.933 01:20:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:17.933 * Looking for test storage... 00:15:17.934 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same toolchain directories repeated several more times, then the standard system paths; full value elided ...]
00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same list re-prefixed; elided ...]
00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same list re-prefixed; elided ...]
00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH
00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo [... expanded PATH value echoed; elided ...]
00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0
00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0
00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic
-- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:15:17.934 01:20:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:23.199 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:23.199 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:23.199 Found net devices under 0000:86:00.0: cvl_0_0 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:23.199 Found net devices under 0000:86:00.1: cvl_0_1 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:23.199 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:23.200 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:23.200 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:23.200 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:23.200 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:23.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:23.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:15:23.200 00:15:23.200 --- 10.0.0.2 ping statistics --- 00:15:23.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:23.200 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:15:23.200 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:23.200 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:23.200 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:15:23.200 00:15:23.200 --- 10.0.0.1 ping statistics --- 00:15:23.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:23.200 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:15:23.200 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:23.200 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:15:23.200 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:23.200 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:23.200 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:23.200 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:23.200 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:23.200 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:23.200 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:23.200 01:20:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:23.200 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:23.200 01:20:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:23.200 01:20:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:23.200 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3365578 00:15:23.200 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3365578 00:15:23.200 01:20:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:23.200 01:20:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 3365578 ']' 00:15:23.200 01:20:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.200 01:20:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:23.200 01:20:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:23.200 01:20:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:23.200 01:20:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:23.200 [2024-07-16 01:20:48.783620] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
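The trace above is nvmf_tcp_init building the physical-NIC test topology: one port of the e810 pair is moved into a private network namespace so that the target (10.0.0.2 inside cvl_0_0_ns_spdk) and the initiator (10.0.0.1 in the default namespace) exchange real NVMe/TCP traffic on a single host. A condensed sketch of the same setup, using the cvl_0_0/cvl_0_1 names this rig happened to enumerate:

  ip netns add cvl_0_0_ns_spdk                       # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move one port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port through the host firewall
  ping -c 1 10.0.0.2                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

Because NVMF_APP is prefixed with "ip netns exec cvl_0_0_ns_spdk" (NVMF_TARGET_NS_CMD), every nvmf_tgt started below runs inside that namespace and listens on 10.0.0.2.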
00:15:23.200 [2024-07-16 01:20:48.783663] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:23.200 EAL: No free 2048 kB hugepages reported on node 1 00:15:23.200 [2024-07-16 01:20:48.841832] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:23.200 [2024-07-16 01:20:48.922628] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:23.200 [2024-07-16 01:20:48.922663] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:23.200 [2024-07-16 01:20:48.922670] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:23.200 [2024-07-16 01:20:48.922676] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:23.200 [2024-07-16 01:20:48.922681] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:23.200 [2024-07-16 01:20:48.922738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:23.200 [2024-07-16 01:20:48.922831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:23.200 [2024-07-16 01:20:48.922916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:23.200 [2024-07-16 01:20:48.922917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.766 01:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:23.766 01:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:15:23.766 01:20:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:23.766 01:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:23.766 01:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:23.766 01:20:49 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:23.766 01:20:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:23.766 01:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.766 01:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:23.766 [2024-07-16 01:20:49.629201] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:23.766 01:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.766 01:20:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:23.766 01:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.766 01:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:23.766 Malloc0 00:15:23.766 01:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.766 01:20:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:23.766 01:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.766 01:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:23.766 01:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.766 01:20:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:23.766 01:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.766 01:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:23.766 01:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.766 01:20:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:23.766 01:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.766 01:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:23.766 [2024-07-16 01:20:49.676806] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:23.766 01:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.766 01:20:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:23.766 test case1: single bdev can't be used in multiple subsystems 00:15:23.766 01:20:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:23.766 01:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.766 01:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:23.766 01:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.766 01:20:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:23.766 01:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.766 01:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:23.766 01:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.766 01:20:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:15:23.766 01:20:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:23.766 01:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.766 01:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:23.766 [2024-07-16 01:20:49.700710] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:23.766 [2024-07-16 01:20:49.700727] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:23.766 [2024-07-16 01:20:49.700734] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.766 request: 00:15:23.766 { 00:15:23.766 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:23.766 "namespace": { 00:15:23.766 "bdev_name": "Malloc0", 00:15:23.767 "no_auto_visible": false 00:15:23.767 }, 00:15:23.767 "method": "nvmf_subsystem_add_ns", 00:15:23.767 "req_id": 1 00:15:23.767 } 00:15:23.767 Got JSON-RPC error response 00:15:23.767 response: 00:15:23.767 { 00:15:23.767 "code": -32602, 00:15:23.767 "message": "Invalid parameters" 00:15:23.767 } 00:15:23.767 01:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:23.767 01:20:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:15:23.767 01:20:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:23.767 01:20:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # 
echo ' Adding namespace failed - expected result.' 00:15:23.767 Adding namespace failed - expected result. 00:15:23.767 01:20:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:23.767 test case2: host connect to nvmf target in multiple paths 00:15:23.767 01:20:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:23.767 01:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.767 01:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:23.767 [2024-07-16 01:20:49.712838] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:23.767 01:20:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.767 01:20:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:25.142 01:20:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:26.150 01:20:52 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:26.150 01:20:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:15:26.150 01:20:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:26.150 01:20:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:26.150 01:20:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:15:28.680 01:20:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:28.680 01:20:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:28.680 01:20:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:28.680 01:20:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:28.680 01:20:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:28.680 01:20:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:15:28.680 01:20:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:28.680 [global] 00:15:28.680 thread=1 00:15:28.680 invalidate=1 00:15:28.680 rw=write 00:15:28.680 time_based=1 00:15:28.680 runtime=1 00:15:28.680 ioengine=libaio 00:15:28.680 direct=1 00:15:28.680 bs=4096 00:15:28.680 iodepth=1 00:15:28.680 norandommap=0 00:15:28.680 numjobs=1 00:15:28.680 00:15:28.680 verify_dump=1 00:15:28.680 verify_backlog=512 00:15:28.680 verify_state_save=0 00:15:28.680 do_verify=1 00:15:28.680 verify=crc32c-intel 00:15:28.680 [job0] 00:15:28.680 filename=/dev/nvme0n1 00:15:28.680 Could not set queue depth (nvme0n1) 00:15:28.680 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:28.680 fio-3.35 00:15:28.680 Starting 1 thread 00:15:29.611 00:15:29.611 job0: (groupid=0, jobs=1): err= 0: pid=3366661: Tue Jul 16 01:20:55 2024 00:15:29.611 read: IOPS=2209, BW=8839KiB/s 
(9051kB/s)(8848KiB/1001msec) 00:15:29.611 slat (nsec): min=6858, max=37570, avg=7890.64, stdev=1356.76 00:15:29.611 clat (usec): min=197, max=427, avg=233.92, stdev=18.81 00:15:29.611 lat (usec): min=204, max=450, avg=241.81, stdev=19.00 00:15:29.611 clat percentiles (usec): 00:15:29.611 | 1.00th=[ 204], 5.00th=[ 215], 10.00th=[ 217], 20.00th=[ 221], 00:15:29.611 | 30.00th=[ 223], 40.00th=[ 225], 50.00th=[ 227], 60.00th=[ 231], 00:15:29.611 | 70.00th=[ 245], 80.00th=[ 253], 90.00th=[ 260], 95.00th=[ 265], 00:15:29.611 | 99.00th=[ 289], 99.50th=[ 297], 99.90th=[ 367], 99.95th=[ 371], 00:15:29.611 | 99.99th=[ 429] 00:15:29.611 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:15:29.611 slat (nsec): min=10064, max=45918, avg=11452.08, stdev=1953.56 00:15:29.611 clat (usec): min=118, max=831, avg=164.41, stdev=38.72 00:15:29.611 lat (usec): min=133, max=842, avg=175.86, stdev=38.83 00:15:29.611 clat percentiles (usec): 00:15:29.611 | 1.00th=[ 133], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 141], 00:15:29.611 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 151], 00:15:29.611 | 70.00th=[ 176], 80.00th=[ 184], 90.00th=[ 247], 95.00th=[ 255], 00:15:29.611 | 99.00th=[ 265], 99.50th=[ 269], 99.90th=[ 326], 99.95th=[ 326], 00:15:29.611 | 99.99th=[ 832] 00:15:29.611 bw ( KiB/s): min=10320, max=10320, per=100.00%, avg=10320.00, stdev= 0.00, samples=1 00:15:29.611 iops : min= 2580, max= 2580, avg=2580.00, stdev= 0.00, samples=1 00:15:29.611 lat (usec) : 250=83.91%, 500=16.07%, 1000=0.02% 00:15:29.611 cpu : usr=4.50%, sys=7.00%, ctx=4772, majf=0, minf=2 00:15:29.611 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:29.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.611 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.611 issued rwts: total=2212,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:29.611 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:29.611 00:15:29.611 Run status group 0 (all jobs): 00:15:29.611 READ: bw=8839KiB/s (9051kB/s), 8839KiB/s-8839KiB/s (9051kB/s-9051kB/s), io=8848KiB (9060kB), run=1001-1001msec 00:15:29.611 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:15:29.611 00:15:29.611 Disk stats (read/write): 00:15:29.611 nvme0n1: ios=2098/2178, merge=0/0, ticks=473/344, in_queue=817, util=91.88% 00:15:29.611 01:20:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:29.869 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:29.869 01:20:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:29.869 01:20:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:15:29.869 01:20:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:29.869 01:20:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:29.869 01:20:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:29.869 01:20:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:29.869 01:20:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:15:29.869 01:20:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:15:29.869 01:20:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:15:29.869 
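The job file that fio-wrapper dumped above is self-contained, so the same one-second write-and-verify pass can be reproduced with a plain fio invocation. This is a minimal equivalent built only from the [global]/[job0] options printed above, with boolean flags given in their command-line form; the /dev/nvme0n1 node depends on enumeration order (norandommap=0 is fio's default and is omitted):

  # write 4 KiB blocks for 1 s, then read the data back and check crc32c
  fio --name=job0 --filename=/dev/nvme0n1 \
      --rw=write --bs=4096 --iodepth=1 --numjobs=1 --thread \
      --ioengine=libaio --direct=1 --invalidate=1 \
      --time_based --runtime=1 \
      --do_verify=1 --verify=crc32c-intel --verify_dump=1 \
      --verify_backlog=512 --verify_state_save=0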
01:20:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:29.869 01:20:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:15:29.869 01:20:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:29.869 01:20:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:15:29.869 01:20:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:29.869 01:20:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:29.869 rmmod nvme_tcp 00:15:29.869 rmmod nvme_fabrics 00:15:29.869 rmmod nvme_keyring 00:15:29.869 01:20:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:29.869 01:20:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:15:29.869 01:20:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:15:29.869 01:20:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3365578 ']' 00:15:29.869 01:20:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3365578 00:15:29.869 01:20:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 3365578 ']' 00:15:29.869 01:20:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 3365578 00:15:29.869 01:20:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:15:29.869 01:20:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:29.869 01:20:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3365578 00:15:29.869 01:20:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:29.869 01:20:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:29.869 01:20:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3365578' 00:15:29.869 killing process with pid 3365578 00:15:29.869 01:20:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 3365578 00:15:29.869 01:20:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 3365578 00:15:30.126 01:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:30.126 01:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:30.126 01:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:30.126 01:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:30.126 01:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:30.126 01:20:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.126 01:20:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:30.126 01:20:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.656 01:20:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:32.656 00:15:32.656 real 0m14.726s 00:15:32.656 user 0m35.208s 00:15:32.656 sys 0m4.733s 00:15:32.656 01:20:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:32.656 01:20:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:32.656 ************************************ 00:15:32.656 END TEST nvmf_nmic 00:15:32.656 ************************************ 00:15:32.656 01:20:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:32.656 01:20:58 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh 
--transport=tcp 00:15:32.656 01:20:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:32.656 01:20:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:32.656 01:20:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:32.656 ************************************ 00:15:32.656 START TEST nvmf_fio_target 00:15:32.656 ************************************ 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:32.656 * Looking for test storage... 00:15:32.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:32.656 01:20:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:32.657 01:20:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:32.657 01:20:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.657 01:20:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:32.657 01:20:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:32.657 01:20:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:15:32.657 01:20:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:37.950 01:21:03 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:37.950 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:37.950 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:37.950 01:21:03 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:37.950 Found net devices under 0000:86:00.0: cvl_0_0 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:37.950 Found net devices under 0000:86:00.1: cvl_0_1 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:37.950 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:15:37.951 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:37.951 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:37.951 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:37.951 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:37.951 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:37.951 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:37.951 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:37.951 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:37.951 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:37.951 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:37.951 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:37.951 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:37.951 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:37.951 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:37.951 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:37.951 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:37.951 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:37.951 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:37.951 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:37.951 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:15:37.951 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:37.951 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:37.951 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:37.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:37.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:15:37.951 00:15:37.951 --- 10.0.0.2 ping statistics --- 00:15:37.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.951 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:15:37.951 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:37.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:37.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:15:37.951 00:15:37.951 --- 10.0.0.1 ping statistics --- 00:15:37.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.951 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:15:37.951 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:37.951 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:15:37.951 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:37.951 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:37.951 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:37.951 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:37.951 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:37.951 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:37.951 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:37.951 01:21:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:15:37.951 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:37.951 01:21:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:37.951 01:21:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.951 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3370539 00:15:37.951 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:37.951 01:21:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3370539 00:15:37.951 01:21:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 3370539 ']' 00:15:37.951 01:21:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.951 01:21:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:37.951 01:21:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
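nvmfappstart launches the target wrapped in the namespace command and then sits in waitforlisten until the application's JSON-RPC socket answers. A minimal sketch of that start-and-wait pattern, assuming this workspace's paths; the polling loop is a simplification, and the real waitforlisten adds retry limits and error handling omitted here:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk \
      "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &    # -m 0xF: reactors on cores 0-3
  nvmfpid=$!
  # poll the RPC socket until the target is ready to take commands
  until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done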
00:15:37.951 01:21:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:37.951 01:21:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.951 [2024-07-16 01:21:03.606231] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:15:37.951 [2024-07-16 01:21:03.606276] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:37.951 EAL: No free 2048 kB hugepages reported on node 1 00:15:37.951 [2024-07-16 01:21:03.663189] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:37.951 [2024-07-16 01:21:03.743468] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:37.951 [2024-07-16 01:21:03.743505] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:37.951 [2024-07-16 01:21:03.743512] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:37.951 [2024-07-16 01:21:03.743517] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:37.951 [2024-07-16 01:21:03.743522] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:37.951 [2024-07-16 01:21:03.743564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:37.951 [2024-07-16 01:21:03.743659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:37.951 [2024-07-16 01:21:03.743745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:37.951 [2024-07-16 01:21:03.743746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.516 01:21:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:38.516 01:21:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:15:38.516 01:21:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:38.516 01:21:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:38.516 01:21:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.516 01:21:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:38.516 01:21:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:38.774 [2024-07-16 01:21:04.613895] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:38.774 01:21:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:39.031 01:21:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:15:39.031 01:21:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:39.031 01:21:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:15:39.289 01:21:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:39.289 01:21:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
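With the target up, fio.sh now provisions its block devices over JSON-RPC: two plain malloc bdevs (Malloc0/Malloc1), a raid0 pair, and a three-way concat set, all exported later through nqn.2016-06.io.spdk:cnode1, as traced next. Condensed from those rpc.py calls, with the transport flags copied verbatim from the trace and the size arguments being MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512; the -b names are added here for readability, whereas the script lets the target auto-name each bdev:

  rpc_py="$spdk/scripts/rpc.py"
  $rpc_py nvmf_create_transport -t tcp -o -u 8192                    # TCP transport, 8 KiB io_unit_size
  $rpc_py bdev_malloc_create 64 512 -b Malloc2                       # 64 MiB RAM bdev, 512 B blocks
  $rpc_py bdev_malloc_create 64 512 -b Malloc3
  $rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'  # 64 KiB strips, RAID level 0

The concat0 set (Malloc4/Malloc5/Malloc6) is assembled the same way with bdev_raid_create -r concat.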
00:15:39.289 01:21:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:39.547 01:21:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:15:39.547 01:21:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:15:39.805 01:21:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:39.805 01:21:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:15:39.805 01:21:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:40.063 01:21:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:15:40.063 01:21:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:40.321 01:21:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:15:40.321 01:21:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:15:40.579 01:21:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:40.579 01:21:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:40.579 01:21:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:40.837 01:21:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:40.837 01:21:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:41.095 01:21:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:41.095 [2024-07-16 01:21:07.026663] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:41.095 01:21:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:15:41.353 01:21:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:15:41.611 01:21:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:42.986 01:21:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:15:42.986 01:21:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:15:42.986 01:21:08 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:42.986 01:21:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:15:42.986 01:21:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:15:42.986 01:21:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:15:44.919 01:21:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:44.919 01:21:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:44.919 01:21:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:44.919 01:21:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:15:44.919 01:21:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:44.919 01:21:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:15:44.919 01:21:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:44.919 [global] 00:15:44.919 thread=1 00:15:44.919 invalidate=1 00:15:44.919 rw=write 00:15:44.919 time_based=1 00:15:44.919 runtime=1 00:15:44.919 ioengine=libaio 00:15:44.919 direct=1 00:15:44.919 bs=4096 00:15:44.919 iodepth=1 00:15:44.919 norandommap=0 00:15:44.919 numjobs=1 00:15:44.919 00:15:44.919 verify_dump=1 00:15:44.919 verify_backlog=512 00:15:44.919 verify_state_save=0 00:15:44.919 do_verify=1 00:15:44.919 verify=crc32c-intel 00:15:44.919 [job0] 00:15:44.919 filename=/dev/nvme0n1 00:15:44.919 [job1] 00:15:44.919 filename=/dev/nvme0n2 00:15:44.919 [job2] 00:15:44.919 filename=/dev/nvme0n3 00:15:44.919 [job3] 00:15:44.919 filename=/dev/nvme0n4 00:15:44.919 Could not set queue depth (nvme0n1) 00:15:44.919 Could not set queue depth (nvme0n2) 00:15:44.919 Could not set queue depth (nvme0n3) 00:15:44.919 Could not set queue depth (nvme0n4) 00:15:45.178 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:45.178 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:45.178 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:45.178 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:45.178 fio-3.35 00:15:45.178 Starting 4 threads 00:15:46.545 00:15:46.545 job0: (groupid=0, jobs=1): err= 0: pid=3372271: Tue Jul 16 01:21:12 2024 00:15:46.545 read: IOPS=53, BW=216KiB/s (221kB/s)(224KiB/1038msec) 00:15:46.545 slat (nsec): min=6823, max=22584, avg=10240.66, stdev=4496.68 00:15:46.545 clat (usec): min=213, max=41432, avg=16292.97, stdev=20007.97 00:15:46.545 lat (usec): min=221, max=41441, avg=16303.21, stdev=20009.67 00:15:46.545 clat percentiles (usec): 00:15:46.545 | 1.00th=[ 215], 5.00th=[ 229], 10.00th=[ 249], 20.00th=[ 269], 00:15:46.545 | 30.00th=[ 285], 40.00th=[ 388], 50.00th=[ 486], 60.00th=[ 515], 00:15:46.545 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:46.546 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:15:46.546 | 99.99th=[41681] 00:15:46.546 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:15:46.546 slat (usec): min=9, max=10073, avg=31.66, stdev=444.67 00:15:46.546 clat 
(usec): min=126, max=1320, avg=209.43, stdev=66.17 00:15:46.546 lat (usec): min=136, max=10267, avg=241.09, stdev=449.02 00:15:46.546 clat percentiles (usec): 00:15:46.546 | 1.00th=[ 135], 5.00th=[ 143], 10.00th=[ 157], 20.00th=[ 172], 00:15:46.546 | 30.00th=[ 180], 40.00th=[ 190], 50.00th=[ 202], 60.00th=[ 217], 00:15:46.546 | 70.00th=[ 229], 80.00th=[ 241], 90.00th=[ 258], 95.00th=[ 302], 00:15:46.546 | 99.00th=[ 351], 99.50th=[ 359], 99.90th=[ 1319], 99.95th=[ 1319], 00:15:46.546 | 99.99th=[ 1319] 00:15:46.546 bw ( KiB/s): min= 4096, max= 4096, per=20.76%, avg=4096.00, stdev= 0.00, samples=1 00:15:46.546 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:46.546 lat (usec) : 250=79.75%, 500=15.49%, 750=0.70% 00:15:46.546 lat (msec) : 2=0.18%, 50=3.87% 00:15:46.546 cpu : usr=0.19%, sys=0.68%, ctx=574, majf=0, minf=2 00:15:46.546 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:46.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:46.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:46.546 issued rwts: total=56,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:46.546 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:46.546 job1: (groupid=0, jobs=1): err= 0: pid=3372272: Tue Jul 16 01:21:12 2024 00:15:46.546 read: IOPS=105, BW=424KiB/s (434kB/s)(424KiB/1001msec) 00:15:46.546 slat (nsec): min=6442, max=23072, avg=10637.64, stdev=5615.17 00:15:46.546 clat (usec): min=245, max=41487, avg=8419.09, stdev=16258.48 00:15:46.546 lat (usec): min=251, max=41496, avg=8429.72, stdev=16263.21 00:15:46.546 clat percentiles (usec): 00:15:46.546 | 1.00th=[ 247], 5.00th=[ 258], 10.00th=[ 262], 20.00th=[ 273], 00:15:46.546 | 30.00th=[ 314], 40.00th=[ 400], 50.00th=[ 433], 60.00th=[ 457], 00:15:46.546 | 70.00th=[ 486], 80.00th=[ 594], 90.00th=[41157], 95.00th=[41157], 00:15:46.546 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:15:46.546 | 99.99th=[41681] 00:15:46.546 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:15:46.546 slat (nsec): min=9000, max=38041, avg=10378.38, stdev=2127.94 00:15:46.546 clat (usec): min=131, max=1374, avg=196.60, stdev=65.70 00:15:46.546 lat (usec): min=141, max=1384, avg=206.98, stdev=65.70 00:15:46.546 clat percentiles (usec): 00:15:46.546 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 159], 00:15:46.546 | 30.00th=[ 167], 40.00th=[ 176], 50.00th=[ 186], 60.00th=[ 198], 00:15:46.546 | 70.00th=[ 215], 80.00th=[ 233], 90.00th=[ 247], 95.00th=[ 265], 00:15:46.546 | 99.00th=[ 318], 99.50th=[ 355], 99.90th=[ 1369], 99.95th=[ 1369], 00:15:46.546 | 99.99th=[ 1369] 00:15:46.546 bw ( KiB/s): min= 4096, max= 4096, per=20.76%, avg=4096.00, stdev= 0.00, samples=1 00:15:46.546 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:46.546 lat (usec) : 250=75.24%, 500=20.23%, 750=0.97% 00:15:46.546 lat (msec) : 2=0.16%, 50=3.40% 00:15:46.546 cpu : usr=0.40%, sys=0.60%, ctx=618, majf=0, minf=1 00:15:46.546 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:46.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:46.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:46.546 issued rwts: total=106,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:46.546 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:46.546 job2: (groupid=0, jobs=1): err= 0: pid=3372273: Tue Jul 16 01:21:12 2024 00:15:46.546 read: 
IOPS=2238, BW=8955KiB/s (9170kB/s)(8964KiB/1001msec) 00:15:46.546 slat (nsec): min=6143, max=30921, avg=7465.85, stdev=1149.07 00:15:46.546 clat (usec): min=177, max=3915, avg=242.63, stdev=87.63 00:15:46.546 lat (usec): min=184, max=3922, avg=250.09, stdev=87.81 00:15:46.546 clat percentiles (usec): 00:15:46.546 | 1.00th=[ 184], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 198], 00:15:46.546 | 30.00th=[ 206], 40.00th=[ 233], 50.00th=[ 243], 60.00th=[ 251], 00:15:46.546 | 70.00th=[ 265], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 297], 00:15:46.546 | 99.00th=[ 310], 99.50th=[ 355], 99.90th=[ 510], 99.95th=[ 515], 00:15:46.546 | 99.99th=[ 3916] 00:15:46.546 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:15:46.546 slat (nsec): min=8985, max=41387, avg=10715.67, stdev=1822.15 00:15:46.546 clat (usec): min=120, max=3936, avg=156.21, stdev=78.68 00:15:46.546 lat (usec): min=130, max=3945, avg=166.93, stdev=78.94 00:15:46.546 clat percentiles (usec): 00:15:46.546 | 1.00th=[ 124], 5.00th=[ 127], 10.00th=[ 129], 20.00th=[ 135], 00:15:46.546 | 30.00th=[ 139], 40.00th=[ 145], 50.00th=[ 151], 60.00th=[ 159], 00:15:46.546 | 70.00th=[ 165], 80.00th=[ 172], 90.00th=[ 182], 95.00th=[ 196], 00:15:46.546 | 99.00th=[ 241], 99.50th=[ 253], 99.90th=[ 314], 99.95th=[ 334], 00:15:46.546 | 99.99th=[ 3949] 00:15:46.546 bw ( KiB/s): min= 9072, max= 9072, per=45.98%, avg=9072.00, stdev= 0.00, samples=1 00:15:46.546 iops : min= 2268, max= 2268, avg=2268.00, stdev= 0.00, samples=1 00:15:46.546 lat (usec) : 250=80.40%, 500=19.45%, 750=0.10% 00:15:46.546 lat (msec) : 4=0.04% 00:15:46.546 cpu : usr=3.00%, sys=5.50%, ctx=4802, majf=0, minf=1 00:15:46.546 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:46.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:46.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:46.546 issued rwts: total=2241,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:46.546 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:46.546 job3: (groupid=0, jobs=1): err= 0: pid=3372274: Tue Jul 16 01:21:12 2024 00:15:46.546 read: IOPS=1024, BW=4099KiB/s (4197kB/s)(4136KiB/1009msec) 00:15:46.546 slat (nsec): min=7236, max=43739, avg=9757.97, stdev=4707.74 00:15:46.546 clat (usec): min=186, max=41959, avg=676.54, stdev=4196.13 00:15:46.546 lat (usec): min=194, max=41982, avg=686.30, stdev=4197.22 00:15:46.546 clat percentiles (usec): 00:15:46.546 | 1.00th=[ 204], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 229], 00:15:46.546 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 245], 00:15:46.546 | 70.00th=[ 251], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 273], 00:15:46.546 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:15:46.546 | 99.99th=[42206] 00:15:46.546 write: IOPS=1522, BW=6089KiB/s (6235kB/s)(6144KiB/1009msec); 0 zone resets 00:15:46.546 slat (nsec): min=9151, max=50704, avg=13363.25, stdev=5083.99 00:15:46.546 clat (usec): min=122, max=451, avg=175.63, stdev=30.86 00:15:46.546 lat (usec): min=140, max=462, avg=189.00, stdev=31.05 00:15:46.546 clat percentiles (usec): 00:15:46.546 | 1.00th=[ 137], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 157], 00:15:46.546 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:15:46.546 | 70.00th=[ 180], 80.00th=[ 188], 90.00th=[ 204], 95.00th=[ 243], 00:15:46.546 | 99.00th=[ 293], 99.50th=[ 343], 99.90th=[ 433], 99.95th=[ 453], 00:15:46.546 | 99.99th=[ 453] 00:15:46.546 bw ( KiB/s): min= 4096, max= 
8192, per=31.14%, avg=6144.00, stdev=2896.31, samples=2 00:15:46.546 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:15:46.546 lat (usec) : 250=85.95%, 500=13.62% 00:15:46.546 lat (msec) : 50=0.43% 00:15:46.546 cpu : usr=2.58%, sys=3.77%, ctx=2570, majf=0, minf=1 00:15:46.546 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:46.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:46.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:46.546 issued rwts: total=1034,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:46.546 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:46.546 00:15:46.546 Run status group 0 (all jobs): 00:15:46.546 READ: bw=12.9MiB/s (13.6MB/s), 216KiB/s-8955KiB/s (221kB/s-9170kB/s), io=13.4MiB (14.1MB), run=1001-1038msec 00:15:46.546 WRITE: bw=19.3MiB/s (20.2MB/s), 1973KiB/s-9.99MiB/s (2020kB/s-10.5MB/s), io=20.0MiB (21.0MB), run=1001-1038msec 00:15:46.546 00:15:46.546 Disk stats (read/write): 00:15:46.546 nvme0n1: ios=104/512, merge=0/0, ticks=1020/101, in_queue=1121, util=98.10% 00:15:46.546 nvme0n2: ios=124/512, merge=0/0, ticks=742/97, in_queue=839, util=87.37% 00:15:46.546 nvme0n3: ios=1977/2048, merge=0/0, ticks=722/317, in_queue=1039, util=91.45% 00:15:46.546 nvme0n4: ios=1053/1536, merge=0/0, ticks=1087/244, in_queue=1331, util=95.27% 00:15:46.546 01:21:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:15:46.546 [global] 00:15:46.546 thread=1 00:15:46.546 invalidate=1 00:15:46.546 rw=randwrite 00:15:46.546 time_based=1 00:15:46.546 runtime=1 00:15:46.546 ioengine=libaio 00:15:46.546 direct=1 00:15:46.546 bs=4096 00:15:46.546 iodepth=1 00:15:46.546 norandommap=0 00:15:46.546 numjobs=1 00:15:46.546 00:15:46.546 verify_dump=1 00:15:46.546 verify_backlog=512 00:15:46.546 verify_state_save=0 00:15:46.546 do_verify=1 00:15:46.546 verify=crc32c-intel 00:15:46.546 [job0] 00:15:46.546 filename=/dev/nvme0n1 00:15:46.546 [job1] 00:15:46.546 filename=/dev/nvme0n2 00:15:46.546 [job2] 00:15:46.546 filename=/dev/nvme0n3 00:15:46.546 [job3] 00:15:46.546 filename=/dev/nvme0n4 00:15:46.546 Could not set queue depth (nvme0n1) 00:15:46.546 Could not set queue depth (nvme0n2) 00:15:46.546 Could not set queue depth (nvme0n3) 00:15:46.546 Could not set queue depth (nvme0n4) 00:15:46.546 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:46.546 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:46.546 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:46.546 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:46.546 fio-3.35 00:15:46.546 Starting 4 threads 00:15:47.913 00:15:47.913 job0: (groupid=0, jobs=1): err= 0: pid=3372660: Tue Jul 16 01:21:13 2024 00:15:47.913 read: IOPS=25, BW=103KiB/s (105kB/s)(104KiB/1011msec) 00:15:47.913 slat (nsec): min=8674, max=26741, avg=18072.88, stdev=6188.51 00:15:47.913 clat (usec): min=212, max=41102, avg=34646.51, stdev=14965.26 00:15:47.913 lat (usec): min=233, max=41123, avg=34664.58, stdev=14963.59 00:15:47.913 clat percentiles (usec): 00:15:47.913 | 1.00th=[ 212], 5.00th=[ 221], 10.00th=[ 245], 20.00th=[40633], 00:15:47.913 | 30.00th=[40633], 40.00th=[41157], 
50.00th=[41157], 60.00th=[41157], 00:15:47.913 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:47.913 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:47.913 | 99.99th=[41157] 00:15:47.913 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:15:47.913 slat (nsec): min=4414, max=31744, avg=10831.38, stdev=3820.27 00:15:47.913 clat (usec): min=135, max=3849, avg=199.46, stdev=165.55 00:15:47.913 lat (usec): min=146, max=3868, avg=210.30, stdev=165.65 00:15:47.913 clat percentiles (usec): 00:15:47.913 | 1.00th=[ 139], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 157], 00:15:47.913 | 30.00th=[ 165], 40.00th=[ 176], 50.00th=[ 192], 60.00th=[ 208], 00:15:47.913 | 70.00th=[ 215], 80.00th=[ 223], 90.00th=[ 239], 95.00th=[ 253], 00:15:47.913 | 99.00th=[ 285], 99.50th=[ 318], 99.90th=[ 3851], 99.95th=[ 3851], 00:15:47.913 | 99.99th=[ 3851] 00:15:47.913 bw ( KiB/s): min= 4096, max= 4096, per=25.43%, avg=4096.00, stdev= 0.00, samples=1 00:15:47.913 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:47.913 lat (usec) : 250=90.33%, 500=5.39% 00:15:47.913 lat (msec) : 4=0.19%, 50=4.09% 00:15:47.913 cpu : usr=0.40%, sys=0.40%, ctx=540, majf=0, minf=2 00:15:47.913 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:47.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.913 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.913 issued rwts: total=26,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:47.913 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:47.913 job1: (groupid=0, jobs=1): err= 0: pid=3372668: Tue Jul 16 01:21:13 2024 00:15:47.913 read: IOPS=21, BW=86.9KiB/s (89.0kB/s)(88.0KiB/1013msec) 00:15:47.913 slat (nsec): min=9119, max=22841, avg=21841.09, stdev=2860.12 00:15:47.913 clat (usec): min=40836, max=41913, avg=41031.22, stdev=211.88 00:15:47.913 lat (usec): min=40859, max=41935, avg=41053.06, stdev=211.38 00:15:47.913 clat percentiles (usec): 00:15:47.913 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:15:47.913 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:47.913 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:47.913 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:15:47.913 | 99.99th=[41681] 00:15:47.913 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:15:47.913 slat (nsec): min=8698, max=40569, avg=10117.88, stdev=2142.51 00:15:47.913 clat (usec): min=140, max=292, avg=200.94, stdev=27.10 00:15:47.913 lat (usec): min=154, max=304, avg=211.06, stdev=27.33 00:15:47.913 clat percentiles (usec): 00:15:47.913 | 1.00th=[ 151], 5.00th=[ 161], 10.00th=[ 167], 20.00th=[ 178], 00:15:47.913 | 30.00th=[ 184], 40.00th=[ 190], 50.00th=[ 200], 60.00th=[ 208], 00:15:47.913 | 70.00th=[ 215], 80.00th=[ 223], 90.00th=[ 237], 95.00th=[ 251], 00:15:47.913 | 99.00th=[ 277], 99.50th=[ 285], 99.90th=[ 293], 99.95th=[ 293], 00:15:47.913 | 99.99th=[ 293] 00:15:47.913 bw ( KiB/s): min= 4096, max= 4096, per=25.43%, avg=4096.00, stdev= 0.00, samples=1 00:15:47.913 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:47.913 lat (usec) : 250=90.26%, 500=5.62% 00:15:47.913 lat (msec) : 50=4.12% 00:15:47.913 cpu : usr=0.30%, sys=0.49%, ctx=534, majf=0, minf=1 00:15:47.913 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:47.913 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.913 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.913 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:47.913 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:47.913 job2: (groupid=0, jobs=1): err= 0: pid=3372681: Tue Jul 16 01:21:13 2024 00:15:47.913 read: IOPS=2156, BW=8625KiB/s (8832kB/s)(8772KiB/1017msec) 00:15:47.913 slat (nsec): min=6460, max=26818, avg=7277.53, stdev=885.40 00:15:47.913 clat (usec): min=167, max=40979, avg=261.69, stdev=1228.11 00:15:47.913 lat (usec): min=174, max=41001, avg=268.96, stdev=1228.39 00:15:47.913 clat percentiles (usec): 00:15:47.913 | 1.00th=[ 174], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 190], 00:15:47.913 | 30.00th=[ 196], 40.00th=[ 204], 50.00th=[ 235], 60.00th=[ 247], 00:15:47.913 | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 262], 95.00th=[ 265], 00:15:47.913 | 99.00th=[ 277], 99.50th=[ 314], 99.90th=[ 412], 99.95th=[40633], 00:15:47.913 | 99.99th=[41157] 00:15:47.913 write: IOPS=2517, BW=9.83MiB/s (10.3MB/s)(10.0MiB/1017msec); 0 zone resets 00:15:47.913 slat (nsec): min=8907, max=37608, avg=9940.03, stdev=1192.00 00:15:47.913 clat (usec): min=115, max=363, avg=152.45, stdev=33.21 00:15:47.913 lat (usec): min=125, max=401, avg=162.39, stdev=33.37 00:15:47.913 clat percentiles (usec): 00:15:47.913 | 1.00th=[ 122], 5.00th=[ 125], 10.00th=[ 127], 20.00th=[ 130], 00:15:47.913 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 141], 60.00th=[ 145], 00:15:47.913 | 70.00th=[ 153], 80.00th=[ 172], 90.00th=[ 208], 95.00th=[ 233], 00:15:47.913 | 99.00th=[ 249], 99.50th=[ 260], 99.90th=[ 289], 99.95th=[ 334], 00:15:47.913 | 99.99th=[ 363] 00:15:47.913 bw ( KiB/s): min= 8584, max=11896, per=63.56%, avg=10240.00, stdev=2341.94, samples=2 00:15:47.913 iops : min= 2146, max= 2974, avg=2560.00, stdev=585.48, samples=2 00:15:47.913 lat (usec) : 250=84.39%, 500=15.57% 00:15:47.913 lat (msec) : 50=0.04% 00:15:47.913 cpu : usr=2.17%, sys=4.33%, ctx=4753, majf=0, minf=1 00:15:47.913 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:47.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.913 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.913 issued rwts: total=2193,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:47.913 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:47.913 job3: (groupid=0, jobs=1): err= 0: pid=3372687: Tue Jul 16 01:21:13 2024 00:15:47.913 read: IOPS=21, BW=87.1KiB/s (89.2kB/s)(88.0KiB/1010msec) 00:15:47.913 slat (nsec): min=9977, max=22249, avg=11124.18, stdev=2560.21 00:15:47.913 clat (usec): min=40747, max=41380, avg=41008.60, stdev=134.84 00:15:47.913 lat (usec): min=40769, max=41390, avg=41019.72, stdev=133.89 00:15:47.913 clat percentiles (usec): 00:15:47.913 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:15:47.913 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:47.913 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:47.913 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:47.913 | 99.99th=[41157] 00:15:47.913 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:15:47.913 slat (nsec): min=10222, max=92740, avg=14671.48, stdev=5852.91 00:15:47.913 clat (usec): min=137, max=538, avg=190.97, stdev=35.82 00:15:47.913 lat (usec): min=149, max=574, avg=205.65, stdev=36.82 00:15:47.913 clat 
percentiles (usec): 00:15:47.914 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 155], 20.00th=[ 167], 00:15:47.914 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 190], 60.00th=[ 194], 00:15:47.914 | 70.00th=[ 202], 80.00th=[ 210], 90.00th=[ 225], 95.00th=[ 237], 00:15:47.914 | 99.00th=[ 269], 99.50th=[ 420], 99.90th=[ 537], 99.95th=[ 537], 00:15:47.914 | 99.99th=[ 537] 00:15:47.914 bw ( KiB/s): min= 4096, max= 4096, per=25.43%, avg=4096.00, stdev= 0.00, samples=1 00:15:47.914 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:47.914 lat (usec) : 250=93.82%, 500=1.69%, 750=0.37% 00:15:47.914 lat (msec) : 50=4.12% 00:15:47.914 cpu : usr=0.30%, sys=0.99%, ctx=535, majf=0, minf=1 00:15:47.914 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:47.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.914 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.914 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:47.914 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:47.914 00:15:47.914 Run status group 0 (all jobs): 00:15:47.914 READ: bw=8901KiB/s (9114kB/s), 86.9KiB/s-8625KiB/s (89.0kB/s-8832kB/s), io=9052KiB (9269kB), run=1010-1017msec 00:15:47.914 WRITE: bw=15.7MiB/s (16.5MB/s), 2022KiB/s-9.83MiB/s (2070kB/s-10.3MB/s), io=16.0MiB (16.8MB), run=1010-1017msec 00:15:47.914 00:15:47.914 Disk stats (read/write): 00:15:47.914 nvme0n1: ios=48/512, merge=0/0, ticks=1602/101, in_queue=1703, util=85.47% 00:15:47.914 nvme0n2: ios=68/512, merge=0/0, ticks=801/100, in_queue=901, util=90.94% 00:15:47.914 nvme0n3: ios=2105/2255, merge=0/0, ticks=527/335, in_queue=862, util=94.68% 00:15:47.914 nvme0n4: ios=75/512, merge=0/0, ticks=866/92, in_queue=958, util=94.21% 00:15:47.914 01:21:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:15:47.914 [global] 00:15:47.914 thread=1 00:15:47.914 invalidate=1 00:15:47.914 rw=write 00:15:47.914 time_based=1 00:15:47.914 runtime=1 00:15:47.914 ioengine=libaio 00:15:47.914 direct=1 00:15:47.914 bs=4096 00:15:47.914 iodepth=128 00:15:47.914 norandommap=0 00:15:47.914 numjobs=1 00:15:47.914 00:15:47.914 verify_dump=1 00:15:47.914 verify_backlog=512 00:15:47.914 verify_state_save=0 00:15:47.914 do_verify=1 00:15:47.914 verify=crc32c-intel 00:15:47.914 [job0] 00:15:47.914 filename=/dev/nvme0n1 00:15:47.914 [job1] 00:15:47.914 filename=/dev/nvme0n2 00:15:47.914 [job2] 00:15:47.914 filename=/dev/nvme0n3 00:15:47.914 [job3] 00:15:47.914 filename=/dev/nvme0n4 00:15:47.914 Could not set queue depth (nvme0n1) 00:15:47.914 Could not set queue depth (nvme0n2) 00:15:47.914 Could not set queue depth (nvme0n3) 00:15:47.914 Could not set queue depth (nvme0n4) 00:15:48.170 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:48.170 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:48.170 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:48.170 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:48.170 fio-3.35 00:15:48.170 Starting 4 threads 00:15:49.541 00:15:49.541 job0: (groupid=0, jobs=1): err= 0: pid=3373100: Tue Jul 16 01:21:15 2024 00:15:49.541 read: IOPS=4617, BW=18.0MiB/s (18.9MB/s)(18.3MiB/1012msec) 
00:15:49.541 slat (nsec): min=1029, max=14148k, avg=83261.70, stdev=679115.71 00:15:49.541 clat (usec): min=404, max=47318, avg=11292.46, stdev=5125.80 00:15:49.541 lat (usec): min=2576, max=47327, avg=11375.73, stdev=5190.85 00:15:49.541 clat percentiles (usec): 00:15:49.541 | 1.00th=[ 3032], 5.00th=[ 5538], 10.00th=[ 7242], 20.00th=[ 8356], 00:15:49.541 | 30.00th=[ 8979], 40.00th=[ 9372], 50.00th=[ 9765], 60.00th=[11207], 00:15:49.541 | 70.00th=[12387], 80.00th=[13566], 90.00th=[16909], 95.00th=[19006], 00:15:49.541 | 99.00th=[34341], 99.50th=[41681], 99.90th=[47449], 99.95th=[47449], 00:15:49.541 | 99.99th=[47449] 00:15:49.541 write: IOPS=5059, BW=19.8MiB/s (20.7MB/s)(20.0MiB/1012msec); 0 zone resets 00:15:49.541 slat (usec): min=2, max=17143, avg=98.26, stdev=628.15 00:15:49.541 clat (usec): min=174, max=48947, avg=14744.27, stdev=10673.25 00:15:49.541 lat (usec): min=187, max=48953, avg=14842.53, stdev=10750.62 00:15:49.541 clat percentiles (usec): 00:15:49.541 | 1.00th=[ 3228], 5.00th=[ 3687], 10.00th=[ 5407], 20.00th=[ 7373], 00:15:49.541 | 30.00th=[ 7963], 40.00th=[ 8455], 50.00th=[ 9634], 60.00th=[10421], 00:15:49.541 | 70.00th=[17957], 80.00th=[27657], 90.00th=[32637], 95.00th=[35914], 00:15:49.541 | 99.00th=[41681], 99.50th=[45351], 99.90th=[49021], 99.95th=[49021], 00:15:49.541 | 99.99th=[49021] 00:15:49.541 bw ( KiB/s): min=15824, max=24576, per=28.49%, avg=20200.00, stdev=6188.60, samples=2 00:15:49.541 iops : min= 3956, max= 6144, avg=5050.00, stdev=1547.15, samples=2 00:15:49.541 lat (usec) : 250=0.01%, 500=0.08%, 750=0.02% 00:15:49.541 lat (msec) : 2=0.07%, 4=4.55%, 10=49.51%, 20=29.84%, 50=15.91% 00:15:49.541 cpu : usr=3.96%, sys=4.65%, ctx=400, majf=0, minf=1 00:15:49.541 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:15:49.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:49.541 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:49.541 issued rwts: total=4673,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:49.541 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:49.541 job1: (groupid=0, jobs=1): err= 0: pid=3373115: Tue Jul 16 01:21:15 2024 00:15:49.541 read: IOPS=4799, BW=18.7MiB/s (19.7MB/s)(19.0MiB/1011msec) 00:15:49.541 slat (nsec): min=1233, max=10065k, avg=98795.34, stdev=654183.36 00:15:49.541 clat (usec): min=3518, max=39946, avg=11532.00, stdev=3816.32 00:15:49.541 lat (usec): min=3535, max=39951, avg=11630.79, stdev=3867.91 00:15:49.541 clat percentiles (usec): 00:15:49.541 | 1.00th=[ 4555], 5.00th=[ 7701], 10.00th=[ 8848], 20.00th=[ 9241], 00:15:49.541 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10945], 00:15:49.541 | 70.00th=[12256], 80.00th=[13698], 90.00th=[16057], 95.00th=[18482], 00:15:49.541 | 99.00th=[25822], 99.50th=[32375], 99.90th=[40109], 99.95th=[40109], 00:15:49.541 | 99.99th=[40109] 00:15:49.541 write: IOPS=5064, BW=19.8MiB/s (20.7MB/s)(20.0MiB/1011msec); 0 zone resets 00:15:49.541 slat (usec): min=2, max=13457, avg=97.18, stdev=466.40 00:15:49.541 clat (usec): min=2144, max=43461, avg=14083.24, stdev=7788.89 00:15:49.541 lat (usec): min=2165, max=43467, avg=14180.42, stdev=7845.44 00:15:49.541 clat percentiles (usec): 00:15:49.541 | 1.00th=[ 2999], 5.00th=[ 5145], 10.00th=[ 7111], 20.00th=[ 9634], 00:15:49.541 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10159], 60.00th=[11731], 00:15:49.541 | 70.00th=[16712], 80.00th=[20317], 90.00th=[25560], 95.00th=[30802], 00:15:49.541 | 99.00th=[39584], 99.50th=[42206], 99.90th=[43254], 
99.95th=[43254], 00:15:49.541 | 99.99th=[43254] 00:15:49.541 bw ( KiB/s): min=16384, max=24576, per=28.88%, avg=20480.00, stdev=5792.62, samples=2 00:15:49.541 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 00:15:49.541 lat (msec) : 4=1.31%, 10=37.95%, 20=48.38%, 50=12.36% 00:15:49.541 cpu : usr=3.76%, sys=5.15%, ctx=685, majf=0, minf=1 00:15:49.541 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:15:49.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:49.541 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:49.541 issued rwts: total=4852,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:49.541 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:49.541 job2: (groupid=0, jobs=1): err= 0: pid=3373135: Tue Jul 16 01:21:15 2024 00:15:49.541 read: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec) 00:15:49.541 slat (nsec): min=1098, max=10595k, avg=100367.02, stdev=711700.25 00:15:49.541 clat (usec): min=4322, max=28566, avg=12541.14, stdev=2676.14 00:15:49.541 lat (usec): min=4328, max=33314, avg=12641.51, stdev=2762.63 00:15:49.541 clat percentiles (usec): 00:15:49.541 | 1.00th=[ 7373], 5.00th=[ 9110], 10.00th=[10552], 20.00th=[11076], 00:15:49.541 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11731], 60.00th=[12125], 00:15:49.541 | 70.00th=[12780], 80.00th=[14222], 90.00th=[16188], 95.00th=[17695], 00:15:49.541 | 99.00th=[21890], 99.50th=[22152], 99.90th=[28443], 99.95th=[28443], 00:15:49.541 | 99.99th=[28443] 00:15:49.541 write: IOPS=4267, BW=16.7MiB/s (17.5MB/s)(16.8MiB/1010msec); 0 zone resets 00:15:49.541 slat (nsec): min=1822, max=24712k, avg=132248.95, stdev=821353.59 00:15:49.541 clat (usec): min=2877, max=56686, avg=17752.83, stdev=10591.70 00:15:49.541 lat (usec): min=2887, max=56716, avg=17885.08, stdev=10661.83 00:15:49.541 clat percentiles (usec): 00:15:49.541 | 1.00th=[ 4080], 5.00th=[ 7635], 10.00th=[10159], 20.00th=[10814], 00:15:49.541 | 30.00th=[11338], 40.00th=[11731], 50.00th=[12911], 60.00th=[16581], 00:15:49.541 | 70.00th=[19792], 80.00th=[23725], 90.00th=[36963], 95.00th=[44827], 00:15:49.541 | 99.00th=[49021], 99.50th=[49021], 99.90th=[49021], 99.95th=[49021], 00:15:49.541 | 99.99th=[56886] 00:15:49.541 bw ( KiB/s): min=12536, max=21040, per=23.68%, avg=16788.00, stdev=6013.24, samples=2 00:15:49.541 iops : min= 3134, max= 5260, avg=4197.00, stdev=1503.31, samples=2 00:15:49.541 lat (msec) : 4=0.48%, 10=7.15%, 20=76.02%, 50=16.35%, 100=0.01% 00:15:49.541 cpu : usr=3.77%, sys=3.57%, ctx=493, majf=0, minf=1 00:15:49.541 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:15:49.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:49.541 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:49.541 issued rwts: total=4096,4310,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:49.541 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:49.541 job3: (groupid=0, jobs=1): err= 0: pid=3373139: Tue Jul 16 01:21:15 2024 00:15:49.541 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:15:49.541 slat (nsec): min=1361, max=20971k, avg=135795.55, stdev=957155.91 00:15:49.541 clat (usec): min=8499, max=73794, avg=18172.44, stdev=10600.22 00:15:49.541 lat (usec): min=8502, max=73845, avg=18308.24, stdev=10671.02 00:15:49.541 clat percentiles (usec): 00:15:49.541 | 1.00th=[10421], 5.00th=[10683], 10.00th=[10814], 20.00th=[11731], 00:15:49.541 | 30.00th=[12518], 40.00th=[14222], 
50.00th=[15533], 60.00th=[16319], 00:15:49.541 | 70.00th=[17171], 80.00th=[19006], 90.00th=[25822], 95.00th=[45351], 00:15:49.541 | 99.00th=[61604], 99.50th=[61604], 99.90th=[61604], 99.95th=[65274], 00:15:49.541 | 99.99th=[73925] 00:15:49.541 write: IOPS=3375, BW=13.2MiB/s (13.8MB/s)(13.2MiB/1004msec); 0 zone resets 00:15:49.541 slat (usec): min=2, max=13457, avg=164.18, stdev=970.25 00:15:49.541 clat (usec): min=1673, max=68845, avg=21095.13, stdev=14594.37 00:15:49.541 lat (usec): min=2377, max=68858, avg=21259.31, stdev=14686.33 00:15:49.541 clat percentiles (usec): 00:15:49.541 | 1.00th=[ 4359], 5.00th=[ 7832], 10.00th=[10159], 20.00th=[11338], 00:15:49.541 | 30.00th=[12256], 40.00th=[14091], 50.00th=[14615], 60.00th=[15795], 00:15:49.541 | 70.00th=[22938], 80.00th=[32637], 90.00th=[46924], 95.00th=[54789], 00:15:49.541 | 99.00th=[63701], 99.50th=[63701], 99.90th=[68682], 99.95th=[68682], 00:15:49.541 | 99.99th=[68682] 00:15:49.541 bw ( KiB/s): min=12288, max=13800, per=18.40%, avg=13044.00, stdev=1069.15, samples=2 00:15:49.541 iops : min= 3072, max= 3450, avg=3261.00, stdev=267.29, samples=2 00:15:49.541 lat (msec) : 2=0.02%, 4=0.40%, 10=4.72%, 20=70.25%, 50=18.79% 00:15:49.541 lat (msec) : 100=5.82% 00:15:49.541 cpu : usr=2.39%, sys=4.59%, ctx=295, majf=0, minf=1 00:15:49.541 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:15:49.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:49.541 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:49.541 issued rwts: total=3072,3389,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:49.541 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:49.541 00:15:49.541 Run status group 0 (all jobs): 00:15:49.541 READ: bw=64.4MiB/s (67.6MB/s), 12.0MiB/s-18.7MiB/s (12.5MB/s-19.7MB/s), io=65.2MiB (68.4MB), run=1004-1012msec 00:15:49.541 WRITE: bw=69.2MiB/s (72.6MB/s), 13.2MiB/s-19.8MiB/s (13.8MB/s-20.7MB/s), io=70.1MiB (73.5MB), run=1004-1012msec 00:15:49.541 00:15:49.541 Disk stats (read/write): 00:15:49.541 nvme0n1: ios=4262/4608, merge=0/0, ticks=45369/58578, in_queue=103947, util=93.89% 00:15:49.541 nvme0n2: ios=3851/4096, merge=0/0, ticks=43407/61829, in_queue=105236, util=96.45% 00:15:49.541 nvme0n3: ios=3119/3559, merge=0/0, ticks=27613/41142, in_queue=68755, util=94.57% 00:15:49.541 nvme0n4: ios=2605/2801, merge=0/0, ticks=21804/30621, in_queue=52425, util=98.11% 00:15:49.541 01:21:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:15:49.541 [global] 00:15:49.541 thread=1 00:15:49.541 invalidate=1 00:15:49.541 rw=randwrite 00:15:49.541 time_based=1 00:15:49.541 runtime=1 00:15:49.541 ioengine=libaio 00:15:49.541 direct=1 00:15:49.541 bs=4096 00:15:49.541 iodepth=128 00:15:49.541 norandommap=0 00:15:49.541 numjobs=1 00:15:49.541 00:15:49.541 verify_dump=1 00:15:49.541 verify_backlog=512 00:15:49.541 verify_state_save=0 00:15:49.541 do_verify=1 00:15:49.542 verify=crc32c-intel 00:15:49.542 [job0] 00:15:49.542 filename=/dev/nvme0n1 00:15:49.542 [job1] 00:15:49.542 filename=/dev/nvme0n2 00:15:49.542 [job2] 00:15:49.542 filename=/dev/nvme0n3 00:15:49.542 [job3] 00:15:49.542 filename=/dev/nvme0n4 00:15:49.542 Could not set queue depth (nvme0n1) 00:15:49.542 Could not set queue depth (nvme0n2) 00:15:49.542 Could not set queue depth (nvme0n3) 00:15:49.542 Could not set queue depth (nvme0n4) 00:15:49.798 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, 
(W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:49.798 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:49.798 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:49.798 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:49.798 fio-3.35 00:15:49.798 Starting 4 threads 00:15:51.185 00:15:51.185 job0: (groupid=0, jobs=1): err= 0: pid=3373552: Tue Jul 16 01:21:16 2024 00:15:51.185 read: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec) 00:15:51.185 slat (nsec): min=1277, max=10268k, avg=99865.27, stdev=711467.59 00:15:51.185 clat (usec): min=3988, max=39925, avg=11951.03, stdev=3539.39 00:15:51.185 lat (usec): min=3995, max=39928, avg=12050.90, stdev=3591.12 00:15:51.185 clat percentiles (usec): 00:15:51.185 | 1.00th=[ 4228], 5.00th=[ 9241], 10.00th=[ 9765], 20.00th=[10028], 00:15:51.185 | 30.00th=[10159], 40.00th=[10683], 50.00th=[11469], 60.00th=[11863], 00:15:51.185 | 70.00th=[12125], 80.00th=[13042], 90.00th=[15795], 95.00th=[17695], 00:15:51.185 | 99.00th=[21365], 99.50th=[38011], 99.90th=[39060], 99.95th=[40109], 00:15:51.185 | 99.99th=[40109] 00:15:51.185 write: IOPS=5765, BW=22.5MiB/s (23.6MB/s)(22.7MiB/1007msec); 0 zone resets 00:15:51.185 slat (usec): min=2, max=9381, avg=70.15, stdev=336.10 00:15:51.185 clat (usec): min=748, max=39921, avg=10402.62, stdev=3962.83 00:15:51.185 lat (usec): min=1320, max=39925, avg=10472.77, stdev=3979.41 00:15:51.185 clat percentiles (usec): 00:15:51.185 | 1.00th=[ 2638], 5.00th=[ 4293], 10.00th=[ 5997], 20.00th=[ 8455], 00:15:51.185 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10159], 60.00th=[10552], 00:15:51.185 | 70.00th=[11207], 80.00th=[11731], 90.00th=[11994], 95.00th=[16057], 00:15:51.185 | 99.00th=[29754], 99.50th=[30278], 99.90th=[30278], 99.95th=[30278], 00:15:51.185 | 99.99th=[40109] 00:15:51.185 bw ( KiB/s): min=20872, max=24560, per=29.18%, avg=22716.00, stdev=2607.81, samples=2 00:15:51.185 iops : min= 5218, max= 6140, avg=5679.00, stdev=651.95, samples=2 00:15:51.185 lat (usec) : 750=0.01% 00:15:51.185 lat (msec) : 2=0.30%, 4=1.87%, 10=25.02%, 20=70.38%, 50=2.42% 00:15:51.185 cpu : usr=3.78%, sys=5.86%, ctx=752, majf=0, minf=1 00:15:51.185 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:15:51.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:51.186 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:51.186 issued rwts: total=5632,5806,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:51.186 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:51.186 job1: (groupid=0, jobs=1): err= 0: pid=3373566: Tue Jul 16 01:21:16 2024 00:15:51.186 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:15:51.186 slat (nsec): min=1072, max=26196k, avg=124024.07, stdev=866136.89 00:15:51.186 clat (usec): min=6895, max=61438, avg=15709.12, stdev=7415.26 00:15:51.186 lat (usec): min=6899, max=61465, avg=15833.15, stdev=7475.13 00:15:51.186 clat percentiles (usec): 00:15:51.186 | 1.00th=[ 7308], 5.00th=[ 9896], 10.00th=[10814], 20.00th=[11338], 00:15:51.186 | 30.00th=[11731], 40.00th=[12256], 50.00th=[13042], 60.00th=[13698], 00:15:51.186 | 70.00th=[15401], 80.00th=[18220], 90.00th=[25560], 95.00th=[31851], 00:15:51.186 | 99.00th=[45876], 99.50th=[48497], 99.90th=[48497], 99.95th=[48497], 00:15:51.186 | 99.99th=[61604] 00:15:51.186 write: 
IOPS=4190, BW=16.4MiB/s (17.2MB/s)(16.4MiB/1002msec); 0 zone resets 00:15:51.186 slat (nsec): min=1823, max=11741k, avg=111099.98, stdev=731341.14 00:15:51.186 clat (usec): min=1263, max=69807, avg=14829.71, stdev=8197.60 00:15:51.186 lat (usec): min=1274, max=69816, avg=14940.81, stdev=8251.19 00:15:51.186 clat percentiles (usec): 00:15:51.186 | 1.00th=[ 4621], 5.00th=[ 7898], 10.00th=[ 8979], 20.00th=[10945], 00:15:51.186 | 30.00th=[11469], 40.00th=[11731], 50.00th=[12125], 60.00th=[12911], 00:15:51.186 | 70.00th=[15139], 80.00th=[17433], 90.00th=[21365], 95.00th=[32900], 00:15:51.186 | 99.00th=[50594], 99.50th=[56886], 99.90th=[61080], 99.95th=[67634], 00:15:51.186 | 99.99th=[69731] 00:15:51.186 bw ( KiB/s): min=15376, max=17392, per=21.05%, avg=16384.00, stdev=1425.53, samples=2 00:15:51.186 iops : min= 3844, max= 4348, avg=4096.00, stdev=356.38, samples=2 00:15:51.186 lat (msec) : 2=0.11%, 4=0.10%, 10=10.44%, 20=74.54%, 50=14.25% 00:15:51.186 lat (msec) : 100=0.57% 00:15:51.186 cpu : usr=3.30%, sys=4.90%, ctx=303, majf=0, minf=1 00:15:51.186 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:15:51.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:51.186 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:51.186 issued rwts: total=4096,4199,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:51.186 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:51.186 job2: (groupid=0, jobs=1): err= 0: pid=3373583: Tue Jul 16 01:21:16 2024 00:15:51.186 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:15:51.186 slat (nsec): min=1214, max=12588k, avg=108216.87, stdev=754249.36 00:15:51.186 clat (usec): min=2727, max=37123, avg=13743.46, stdev=3546.57 00:15:51.186 lat (usec): min=3081, max=37145, avg=13851.68, stdev=3606.31 00:15:51.186 clat percentiles (usec): 00:15:51.186 | 1.00th=[ 4146], 5.00th=[10159], 10.00th=[10945], 20.00th=[11994], 00:15:51.186 | 30.00th=[12387], 40.00th=[12649], 50.00th=[13042], 60.00th=[13435], 00:15:51.186 | 70.00th=[14222], 80.00th=[14877], 90.00th=[16909], 95.00th=[20317], 00:15:51.186 | 99.00th=[30802], 99.50th=[30802], 99.90th=[32900], 99.95th=[32900], 00:15:51.186 | 99.99th=[36963] 00:15:51.186 write: IOPS=4966, BW=19.4MiB/s (20.3MB/s)(19.5MiB/1004msec); 0 zone resets 00:15:51.186 slat (usec): min=2, max=10927, avg=91.43, stdev=615.48 00:15:51.186 clat (usec): min=2110, max=33607, avg=12816.23, stdev=3822.28 00:15:51.186 lat (usec): min=2121, max=33620, avg=12907.66, stdev=3872.72 00:15:51.186 clat percentiles (usec): 00:15:51.186 | 1.00th=[ 4228], 5.00th=[ 7504], 10.00th=[ 9110], 20.00th=[10683], 00:15:51.186 | 30.00th=[11207], 40.00th=[11731], 50.00th=[12256], 60.00th=[12780], 00:15:51.186 | 70.00th=[13042], 80.00th=[15139], 90.00th=[17695], 95.00th=[20579], 00:15:51.186 | 99.00th=[25297], 99.50th=[27132], 99.90th=[29492], 99.95th=[30802], 00:15:51.186 | 99.99th=[33817] 00:15:51.186 bw ( KiB/s): min=18392, max=20480, per=24.97%, avg=19436.00, stdev=1476.44, samples=2 00:15:51.186 iops : min= 4598, max= 5120, avg=4859.00, stdev=369.11, samples=2 00:15:51.186 lat (msec) : 4=0.84%, 10=9.37%, 20=84.49%, 50=5.29% 00:15:51.186 cpu : usr=3.89%, sys=6.38%, ctx=401, majf=0, minf=1 00:15:51.186 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:15:51.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:51.186 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:51.186 issued rwts: 
total=4608,4986,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:51.186 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:51.186 job3: (groupid=0, jobs=1): err= 0: pid=3373588: Tue Jul 16 01:21:16 2024 00:15:51.186 read: IOPS=4457, BW=17.4MiB/s (18.3MB/s)(17.4MiB/1002msec) 00:15:51.186 slat (nsec): min=1313, max=14809k, avg=118023.98, stdev=807120.21 00:15:51.186 clat (usec): min=1254, max=42579, avg=14336.16, stdev=4703.91 00:15:51.186 lat (usec): min=6204, max=42596, avg=14454.18, stdev=4773.23 00:15:51.186 clat percentiles (usec): 00:15:51.186 | 1.00th=[ 6783], 5.00th=[ 9241], 10.00th=[10421], 20.00th=[11207], 00:15:51.186 | 30.00th=[12256], 40.00th=[12780], 50.00th=[13042], 60.00th=[13435], 00:15:51.186 | 70.00th=[14222], 80.00th=[16319], 90.00th=[21627], 95.00th=[24249], 00:15:51.186 | 99.00th=[30802], 99.50th=[31065], 99.90th=[33162], 99.95th=[36963], 00:15:51.186 | 99.99th=[42730] 00:15:51.186 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:15:51.186 slat (usec): min=2, max=11301, avg=97.09, stdev=441.17 00:15:51.186 clat (usec): min=7173, max=36169, avg=13613.03, stdev=2789.05 00:15:51.186 lat (usec): min=7182, max=36181, avg=13710.12, stdev=2813.54 00:15:51.186 clat percentiles (usec): 00:15:51.186 | 1.00th=[ 8225], 5.00th=[10552], 10.00th=[11338], 20.00th=[11731], 00:15:51.186 | 30.00th=[12125], 40.00th=[12518], 50.00th=[12911], 60.00th=[13304], 00:15:51.186 | 70.00th=[13566], 80.00th=[16057], 90.00th=[17957], 95.00th=[19530], 00:15:51.186 | 99.00th=[21627], 99.50th=[21627], 99.90th=[24249], 99.95th=[26346], 00:15:51.186 | 99.99th=[35914] 00:15:51.186 bw ( KiB/s): min=16384, max=20480, per=23.68%, avg=18432.00, stdev=2896.31, samples=2 00:15:51.186 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:15:51.186 lat (msec) : 2=0.01%, 10=5.58%, 20=85.09%, 50=9.32% 00:15:51.186 cpu : usr=3.40%, sys=5.00%, ctx=597, majf=0, minf=1 00:15:51.186 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:15:51.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:51.186 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:51.186 issued rwts: total=4466,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:51.186 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:51.186 00:15:51.186 Run status group 0 (all jobs): 00:15:51.186 READ: bw=72.9MiB/s (76.5MB/s), 16.0MiB/s-21.8MiB/s (16.7MB/s-22.9MB/s), io=73.4MiB (77.0MB), run=1002-1007msec 00:15:51.186 WRITE: bw=76.0MiB/s (79.7MB/s), 16.4MiB/s-22.5MiB/s (17.2MB/s-23.6MB/s), io=76.6MiB (80.3MB), run=1002-1007msec 00:15:51.186 00:15:51.186 Disk stats (read/write): 00:15:51.186 nvme0n1: ios=4645/5007, merge=0/0, ticks=53956/51321, in_queue=105277, util=97.70% 00:15:51.186 nvme0n2: ios=3543/3584, merge=0/0, ticks=25965/23580, in_queue=49545, util=98.07% 00:15:51.186 nvme0n3: ios=4136/4321, merge=0/0, ticks=47233/45002, in_queue=92235, util=97.71% 00:15:51.186 nvme0n4: ios=3642/3951, merge=0/0, ticks=27172/25459, in_queue=52631, util=98.22% 00:15:51.186 01:21:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:15:51.186 01:21:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3373683 00:15:51.186 01:21:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:15:51.186 01:21:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:15:51.186 [global] 00:15:51.186 thread=1 00:15:51.186 
invalidate=1 00:15:51.186 rw=read 00:15:51.186 time_based=1 00:15:51.186 runtime=10 00:15:51.186 ioengine=libaio 00:15:51.186 direct=1 00:15:51.186 bs=4096 00:15:51.186 iodepth=1 00:15:51.186 norandommap=1 00:15:51.186 numjobs=1 00:15:51.186 00:15:51.186 [job0] 00:15:51.186 filename=/dev/nvme0n1 00:15:51.186 [job1] 00:15:51.186 filename=/dev/nvme0n2 00:15:51.186 [job2] 00:15:51.186 filename=/dev/nvme0n3 00:15:51.186 [job3] 00:15:51.186 filename=/dev/nvme0n4 00:15:51.186 Could not set queue depth (nvme0n1) 00:15:51.186 Could not set queue depth (nvme0n2) 00:15:51.186 Could not set queue depth (nvme0n3) 00:15:51.186 Could not set queue depth (nvme0n4) 00:15:51.453 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:51.453 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:51.453 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:51.453 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:51.453 fio-3.35 00:15:51.453 Starting 4 threads 00:15:53.978 01:21:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:15:54.235 01:21:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:15:54.235 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=3104768, buflen=4096 00:15:54.235 fio: pid=3373988, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:54.492 01:21:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:54.492 01:21:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:15:54.492 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=299008, buflen=4096 00:15:54.492 fio: pid=3373987, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:54.749 01:21:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:54.749 01:21:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:15:54.749 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=311296, buflen=4096 00:15:54.749 fio: pid=3373979, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:54.749 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=39030784, buflen=4096 00:15:54.749 fio: pid=3373986, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:15:54.749 01:21:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:54.749 01:21:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:15:54.749 00:15:54.749 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3373979: Tue Jul 16 01:21:20 2024 00:15:54.749 read: IOPS=24, BW=97.9KiB/s (100kB/s)(304KiB/3106msec) 00:15:54.749 slat (nsec): min=11580, max=65982, avg=23258.45, stdev=6548.88 00:15:54.749 clat (usec): 
min=460, max=42017, avg=40470.65, stdev=4654.04 00:15:54.749 lat (usec): min=517, max=42041, avg=40493.93, stdev=4650.17 00:15:54.749 clat percentiles (usec): 00:15:54.749 | 1.00th=[ 461], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:15:54.749 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:54.749 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:54.749 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:54.749 | 99.99th=[42206] 00:15:54.749 bw ( KiB/s): min= 93, max= 104, per=0.77%, avg=98.00, stdev= 4.43, samples=6 00:15:54.749 iops : min= 23, max= 26, avg=24.33, stdev= 1.03, samples=6 00:15:54.749 lat (usec) : 500=1.30% 00:15:54.749 lat (msec) : 50=97.40% 00:15:54.749 cpu : usr=0.13%, sys=0.00%, ctx=79, majf=0, minf=1 00:15:54.749 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:54.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:54.749 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:54.749 issued rwts: total=77,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:54.749 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:54.749 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=3373986: Tue Jul 16 01:21:20 2024 00:15:54.749 read: IOPS=2905, BW=11.3MiB/s (11.9MB/s)(37.2MiB/3280msec) 00:15:54.749 slat (usec): min=5, max=7161, avg= 8.58, stdev=100.73 00:15:54.749 clat (usec): min=165, max=42034, avg=334.32, stdev=1875.35 00:15:54.749 lat (usec): min=171, max=48024, avg=342.15, stdev=1892.49 00:15:54.749 clat percentiles (usec): 00:15:54.749 | 1.00th=[ 186], 5.00th=[ 204], 10.00th=[ 237], 20.00th=[ 245], 00:15:54.749 | 30.00th=[ 247], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 251], 00:15:54.749 | 70.00th=[ 253], 80.00th=[ 258], 90.00th=[ 260], 95.00th=[ 265], 00:15:54.749 | 99.00th=[ 281], 99.50th=[ 367], 99.90th=[41157], 99.95th=[41157], 00:15:54.749 | 99.99th=[42206] 00:15:54.749 bw ( KiB/s): min= 113, max=16056, per=99.74%, avg=12694.83, stdev=6232.56, samples=6 00:15:54.749 iops : min= 28, max= 4014, avg=3173.67, stdev=1558.24, samples=6 00:15:54.749 lat (usec) : 250=52.72%, 500=47.00%, 750=0.02%, 1000=0.01% 00:15:54.749 lat (msec) : 2=0.01%, 4=0.02%, 50=0.21% 00:15:54.749 cpu : usr=0.88%, sys=2.84%, ctx=9531, majf=0, minf=1 00:15:54.749 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:54.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:54.749 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:54.749 issued rwts: total=9530,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:54.749 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:54.749 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3373987: Tue Jul 16 01:21:20 2024 00:15:54.749 read: IOPS=25, BW=99.6KiB/s (102kB/s)(292KiB/2933msec) 00:15:54.749 slat (nsec): min=8884, max=30625, avg=15105.73, stdev=5713.21 00:15:54.749 clat (usec): min=257, max=41232, avg=39868.45, stdev=6673.96 00:15:54.749 lat (usec): min=267, max=41244, avg=39883.45, stdev=6673.01 00:15:54.749 clat percentiles (usec): 00:15:54.749 | 1.00th=[ 258], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:15:54.749 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:54.749 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:54.749 | 99.00th=[41157], 
99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:54.749 | 99.99th=[41157] 00:15:54.749 bw ( KiB/s): min= 96, max= 112, per=0.79%, avg=100.80, stdev= 7.16, samples=5 00:15:54.749 iops : min= 24, max= 28, avg=25.20, stdev= 1.79, samples=5 00:15:54.749 lat (usec) : 500=1.35%, 750=1.35% 00:15:54.749 lat (msec) : 50=95.95% 00:15:54.749 cpu : usr=0.10%, sys=0.00%, ctx=75, majf=0, minf=1 00:15:54.749 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:54.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:54.749 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:54.749 issued rwts: total=74,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:54.749 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:54.749 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3373988: Tue Jul 16 01:21:20 2024 00:15:54.749 read: IOPS=278, BW=1113KiB/s (1140kB/s)(3032KiB/2724msec) 00:15:54.749 slat (nsec): min=6556, max=30012, avg=8780.20, stdev=4123.82 00:15:54.749 clat (usec): min=195, max=41990, avg=3555.42, stdev=11107.80 00:15:54.749 lat (usec): min=202, max=42011, avg=3564.18, stdev=11111.44 00:15:54.749 clat percentiles (usec): 00:15:54.749 | 1.00th=[ 237], 5.00th=[ 241], 10.00th=[ 245], 20.00th=[ 249], 00:15:54.749 | 30.00th=[ 251], 40.00th=[ 255], 50.00th=[ 258], 60.00th=[ 262], 00:15:54.749 | 70.00th=[ 265], 80.00th=[ 281], 90.00th=[ 408], 95.00th=[41157], 00:15:54.749 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:54.749 | 99.99th=[42206] 00:15:54.749 bw ( KiB/s): min= 96, max= 2224, per=4.11%, avg=523.20, stdev=950.78, samples=5 00:15:54.749 iops : min= 24, max= 556, avg=130.80, stdev=237.70, samples=5 00:15:54.749 lat (usec) : 250=25.30%, 500=66.40% 00:15:54.749 lat (msec) : 2=0.13%, 50=8.04% 00:15:54.749 cpu : usr=0.04%, sys=0.33%, ctx=760, majf=0, minf=2 00:15:54.749 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:54.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:54.749 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:54.749 issued rwts: total=759,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:54.749 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:54.749 00:15:54.749 Run status group 0 (all jobs): 00:15:54.749 READ: bw=12.4MiB/s (13.0MB/s), 97.9KiB/s-11.3MiB/s (100kB/s-11.9MB/s), io=40.8MiB (42.7MB), run=2724-3280msec 00:15:54.749 00:15:54.749 Disk stats (read/write): 00:15:54.749 nvme0n1: ios=77/0, merge=0/0, ticks=3086/0, in_queue=3086, util=95.13% 00:15:54.749 nvme0n2: ios=9524/0, merge=0/0, ticks=2937/0, in_queue=2937, util=95.91% 00:15:54.749 nvme0n3: ios=116/0, merge=0/0, ticks=3098/0, in_queue=3098, util=99.09% 00:15:54.749 nvme0n4: ios=722/0, merge=0/0, ticks=3026/0, in_queue=3026, util=98.85% 00:15:55.007 01:21:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:55.007 01:21:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:15:55.265 01:21:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:55.265 01:21:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:15:55.265 
01:21:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:15:55.265 01:21:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5
00:15:55.534 01:21:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:15:55.534 01:21:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6
00:15:55.819 01:21:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0
00:15:55.819 01:21:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 3373683
00:15:55.819 01:21:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4
00:15:55.819 01:21:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:15:55.819 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:55.819 01:21:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:15:55.819 01:21:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0
00:15:55.819 01:21:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:15:55.819 01:21:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:15:55.819 01:21:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:15:55.819 01:21:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:15:55.819 01:21:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0
00:15:55.819 01:21:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']'
00:15:55.819 01:21:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected'
00:15:55.819 nvmf hotplug test: fio failed as expected
00:15:55.819 01:21:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:15:56.091 01:21:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state
00:15:56.091 01:21:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state
00:15:56.091 01:21:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state
00:15:56.091 01:21:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT
00:15:56.091 01:21:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini
00:15:56.091 01:21:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup
00:15:56.091 01:21:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync
00:15:56.091 01:21:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:15:56.091 01:21:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e
00:15:56.091 01:21:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20}
00:15:56.091 01:21:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:15:56.091 rmmod nvme_tcp
00:15:56.091 rmmod nvme_fabrics
00:15:56.091 rmmod nvme_keyring
00:15:56.091 01:21:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:15:56.091 01:21:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e
00:15:56.091 01:21:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0
00:15:56.091 01:21:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 3370539 ']'
00:15:56.091 01:21:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3370539
00:15:56.091 01:21:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 3370539 ']'
00:15:56.091 01:21:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 3370539
00:15:56.091 01:21:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname
00:15:56.091 01:21:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:15:56.091 01:21:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3370539
00:15:56.092 01:21:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:15:56.092 01:21:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:15:56.092 01:21:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3370539'
00:15:56.092 killing process with pid 3370539
00:15:56.092 01:21:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 3370539
00:15:56.092 01:21:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 3370539
00:15:56.354 01:21:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:15:56.354 01:21:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:15:56.354 01:21:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:15:56.354 01:21:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:15:56.354 01:21:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns
00:15:56.354 01:21:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:56.354 01:21:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:15:56.354 01:21:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:58.884 01:21:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:15:58.884
00:15:58.884 real 0m26.089s
00:15:58.884 user 1m46.177s
00:15:58.884 sys 0m7.539s
00:15:58.884 01:21:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable
00:15:58.884 01:21:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:15:58.884 ************************************
00:15:58.884 END TEST nvmf_fio_target
00:15:58.884 ************************************
00:15:58.884 01:21:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:15:58.884 01:21:24 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp
00:15:58.884 01:21:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:15:58.884 01:21:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:15:58.884 01:21:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:15:58.884 ************************************
00:15:58.884 START TEST nvmf_bdevio
00:15:58.884 ************************************
00:15:58.884 01:21:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp
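For reference before the bdevio run below: the hotplug pass that just finished in the trace above reduces to a few steps. The sketch that follows is a condensed, hand-written reconstruction from this trace, not the verbatim target/fio.sh: the fio-wrapper and rpc.py paths are shortened to their repo-relative form, the per-bdev delete xtrace is folded into one loop, and the script's fio_status bookkeeping is simplified to a plain wait.

    #!/usr/bin/env bash
    # Start a 10-second fio read job against the connected namespaces in the
    # background (this is what target/fio.sh@58 launched as pid 3373683).
    scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3
    # Pull the storage out from under fio, top of the stack first: the raid
    # and concat bdevs, then the malloc bdevs beneath them.
    scripts/rpc.py bdev_raid_delete concat0
    scripts/rpc.py bdev_raid_delete raid0
    for malloc_bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        scripts/rpc.py bdev_malloc_delete "$malloc_bdev"
    done
    # fio is expected to die with Remote I/O errors (the trace shows
    # fio_status=4); a clean exit would mean the hotplug went unnoticed.
    if wait "$fio_pid"; then
        echo "unexpected: fio survived bdev hotplug"
    else
        echo "nvmf hotplug test: fio failed as expected"
    fi
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1

The "fio: io_u error ... error=Remote I/O error" lines earlier in the trace are that expected failure surfacing on /dev/nvme0n1 through /dev/nvme0n4 as each backing bdev disappears.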
00:15:58.884 * Looking for test storage... 00:15:58.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:58.884 01:21:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:58.884 01:21:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:15:58.884 01:21:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:58.884 01:21:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:58.884 01:21:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:58.884 01:21:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:58.884 01:21:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:58.884 01:21:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:58.884 01:21:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:58.884 01:21:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:58.884 01:21:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:58.884 01:21:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:58.884 01:21:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:58.884 01:21:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:58.884 01:21:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:58.884 01:21:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:58.884 01:21:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:58.885 01:21:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:58.885 01:21:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:58.885 01:21:24 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:58.885 01:21:24 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:58.885 01:21:24 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:58.885 01:21:24 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.885 01:21:24 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.885 01:21:24 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.885 01:21:24 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:15:58.885 01:21:24 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.885 01:21:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:15:58.885 01:21:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:58.885 01:21:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:58.885 01:21:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:58.885 01:21:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:58.885 01:21:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:58.885 01:21:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:58.885 01:21:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:58.885 01:21:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:58.885 01:21:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:58.885 01:21:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:58.885 01:21:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:15:58.885 01:21:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:58.885 01:21:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:58.885 01:21:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:58.885 01:21:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:58.885 01:21:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:58.885 01:21:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:58.885 01:21:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:15:58.885 01:21:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:58.885 01:21:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:58.885 01:21:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:58.885 01:21:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:15:58.885 01:21:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:04.142 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:04.142 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:04.142 Found net devices under 0000:86:00.0: cvl_0_0 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:04.142 
Found net devices under 0000:86:00.1: cvl_0_1 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:04.142 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:04.142 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:16:04.142 00:16:04.142 --- 10.0.0.2 ping statistics --- 00:16:04.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.142 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:04.142 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:04.142 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:16:04.142 00:16:04.142 --- 10.0.0.1 ping statistics --- 00:16:04.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.142 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3378218 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3378218 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 3378218 ']' 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:04.142 01:21:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:04.142 [2024-07-16 01:21:29.950944] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:16:04.142 [2024-07-16 01:21:29.950987] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:04.142 EAL: No free 2048 kB hugepages reported on node 1 00:16:04.142 [2024-07-16 01:21:30.008903] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:04.142 [2024-07-16 01:21:30.092689] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:04.142 [2024-07-16 01:21:30.092725] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:04.143 [2024-07-16 01:21:30.092732] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:04.143 [2024-07-16 01:21:30.092738] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:04.143 [2024-07-16 01:21:30.092743] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:04.143 [2024-07-16 01:21:30.092851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:04.143 [2024-07-16 01:21:30.092974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:16:04.143 [2024-07-16 01:21:30.093080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:04.143 [2024-07-16 01:21:30.093081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:16:05.071 01:21:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:05.071 01:21:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:16:05.071 01:21:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:05.071 01:21:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:05.071 01:21:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:05.071 01:21:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:05.071 01:21:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:05.071 01:21:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.071 01:21:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:05.071 [2024-07-16 01:21:30.799151] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:05.071 01:21:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.071 01:21:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:05.071 01:21:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.071 01:21:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:05.071 Malloc0 00:16:05.071 01:21:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.071 01:21:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:05.071 01:21:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.071 01:21:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:05.071 01:21:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.071 01:21:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:05.071 01:21:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.071 01:21:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:05.071 01:21:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.071 01:21:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:05.071 01:21:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.071 01:21:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
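Note: the rpc_cmd calls traced above are the complete target bring-up for the bdevio run (`-m 0x78` is binary 01111000, hence the reactors on cores 3 through 6), and the listening notice that follows confirms the final RPC took effect. Outside the harness the same sequence can be issued directly against the target's RPC socket with scripts/rpc.py; the arguments below are copied from the log:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MiB bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420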
00:16:05.071 [2024-07-16 01:21:30.850590] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:05.071 01:21:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.071 01:21:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:05.071 01:21:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:05.071 01:21:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:16:05.071 01:21:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:16:05.071 01:21:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:05.071 01:21:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:05.071 { 00:16:05.071 "params": { 00:16:05.071 "name": "Nvme$subsystem", 00:16:05.071 "trtype": "$TEST_TRANSPORT", 00:16:05.071 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:05.071 "adrfam": "ipv4", 00:16:05.071 "trsvcid": "$NVMF_PORT", 00:16:05.071 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:05.071 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:05.071 "hdgst": ${hdgst:-false}, 00:16:05.071 "ddgst": ${ddgst:-false} 00:16:05.071 }, 00:16:05.071 "method": "bdev_nvme_attach_controller" 00:16:05.071 } 00:16:05.071 EOF 00:16:05.071 )") 00:16:05.071 01:21:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:16:05.071 01:21:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:16:05.071 01:21:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:16:05.071 01:21:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:05.071 "params": { 00:16:05.071 "name": "Nvme1", 00:16:05.071 "trtype": "tcp", 00:16:05.071 "traddr": "10.0.0.2", 00:16:05.071 "adrfam": "ipv4", 00:16:05.071 "trsvcid": "4420", 00:16:05.071 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:05.071 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:05.071 "hdgst": false, 00:16:05.071 "ddgst": false 00:16:05.071 }, 00:16:05.071 "method": "bdev_nvme_attach_controller" 00:16:05.071 }' 00:16:05.071 [2024-07-16 01:21:30.900745] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
00:16:05.071 [2024-07-16 01:21:30.900790] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3378257 ] 00:16:05.071 EAL: No free 2048 kB hugepages reported on node 1 00:16:05.071 [2024-07-16 01:21:30.957246] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:05.071 [2024-07-16 01:21:31.032673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:05.071 [2024-07-16 01:21:31.032769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:05.071 [2024-07-16 01:21:31.032771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.327 I/O targets: 00:16:05.327 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:05.327 00:16:05.327 00:16:05.327 CUnit - A unit testing framework for C - Version 2.1-3 00:16:05.327 http://cunit.sourceforge.net/ 00:16:05.327 00:16:05.327 00:16:05.327 Suite: bdevio tests on: Nvme1n1 00:16:05.327 Test: blockdev write read block ...passed 00:16:05.327 Test: blockdev write zeroes read block ...passed 00:16:05.327 Test: blockdev write zeroes read no split ...passed 00:16:05.327 Test: blockdev write zeroes read split ...passed 00:16:05.583 Test: blockdev write zeroes read split partial ...passed 00:16:05.583 Test: blockdev reset ...[2024-07-16 01:21:31.343018] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:05.583 [2024-07-16 01:21:31.343085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e678c0 (9): Bad file descriptor 00:16:05.583 [2024-07-16 01:21:31.398364] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:05.583 passed 00:16:05.583 Test: blockdev write read 8 blocks ...passed 00:16:05.583 Test: blockdev write read size > 128k ...passed 00:16:05.583 Test: blockdev write read invalid size ...passed 00:16:05.583 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:05.583 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:05.583 Test: blockdev write read max offset ...passed 00:16:05.583 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:05.583 Test: blockdev writev readv 8 blocks ...passed 00:16:05.583 Test: blockdev writev readv 30 x 1block ...passed 00:16:05.838 Test: blockdev writev readv block ...passed 00:16:05.838 Test: blockdev writev readv size > 128k ...passed 00:16:05.838 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:05.838 Test: blockdev comparev and writev ...[2024-07-16 01:21:31.607979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:05.838 [2024-07-16 01:21:31.608007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:05.838 [2024-07-16 01:21:31.608020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:05.838 [2024-07-16 01:21:31.608027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:05.838 [2024-07-16 01:21:31.608243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:05.838 [2024-07-16 01:21:31.608253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:05.838 [2024-07-16 01:21:31.608264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:05.838 [2024-07-16 01:21:31.608271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:05.838 [2024-07-16 01:21:31.608513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:05.838 [2024-07-16 01:21:31.608523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:05.838 [2024-07-16 01:21:31.608534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:05.838 [2024-07-16 01:21:31.608540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:05.838 [2024-07-16 01:21:31.608770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:05.838 [2024-07-16 01:21:31.608779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:05.838 [2024-07-16 01:21:31.608794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:05.838 [2024-07-16 01:21:31.608801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:05.838 passed 00:16:05.838 Test: blockdev nvme passthru rw ...passed 00:16:05.838 Test: blockdev nvme passthru vendor specific ...[2024-07-16 01:21:31.690637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:05.839 [2024-07-16 01:21:31.690652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:05.839 [2024-07-16 01:21:31.690756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:05.839 [2024-07-16 01:21:31.690766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:05.839 [2024-07-16 01:21:31.690871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:05.839 [2024-07-16 01:21:31.690880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:05.839 [2024-07-16 01:21:31.690984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:05.839 [2024-07-16 01:21:31.690992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:05.839 passed 00:16:05.839 Test: blockdev nvme admin passthru ...passed 00:16:05.839 Test: blockdev copy ...passed 00:16:05.839 00:16:05.839 Run Summary: Type Total Ran Passed Failed Inactive 00:16:05.839 suites 1 1 n/a 0 0 00:16:05.839 tests 23 23 23 0 0 00:16:05.839 asserts 152 152 152 0 n/a 00:16:05.839 00:16:05.839 Elapsed time = 1.113 seconds 00:16:06.094 01:21:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:06.094 01:21:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.094 01:21:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:06.094 01:21:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.094 01:21:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:06.094 01:21:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:16:06.094 01:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:06.094 01:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:16:06.094 01:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:06.094 01:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:16:06.094 01:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:06.094 01:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:06.094 rmmod nvme_tcp 00:16:06.094 rmmod nvme_fabrics 00:16:06.094 rmmod nvme_keyring 00:16:06.094 01:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:06.094 01:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:16:06.094 01:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:16:06.094 01:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3378218 ']' 00:16:06.094 01:21:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3378218 00:16:06.094 01:21:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
3378218 ']' 00:16:06.094 01:21:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 3378218 00:16:06.094 01:21:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:16:06.094 01:21:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:06.094 01:21:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3378218 00:16:06.094 01:21:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:16:06.094 01:21:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:16:06.094 01:21:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3378218' 00:16:06.094 killing process with pid 3378218 00:16:06.094 01:21:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 3378218 00:16:06.094 01:21:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 3378218 00:16:06.349 01:21:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:06.349 01:21:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:06.349 01:21:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:06.349 01:21:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:06.349 01:21:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:06.349 01:21:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.349 01:21:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:06.349 01:21:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.872 01:21:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:08.872 00:16:08.872 real 0m9.942s 00:16:08.872 user 0m12.016s 00:16:08.872 sys 0m4.617s 00:16:08.872 01:21:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:08.872 01:21:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:08.872 ************************************ 00:16:08.872 END TEST nvmf_bdevio 00:16:08.872 ************************************ 00:16:08.872 01:21:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:08.872 01:21:34 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:08.872 01:21:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:08.872 01:21:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:08.872 01:21:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:08.872 ************************************ 00:16:08.872 START TEST nvmf_auth_target 00:16:08.872 ************************************ 00:16:08.872 01:21:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:08.872 * Looking for test storage... 
00:16:08.872 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:08.872 01:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:08.872 01:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:08.872 01:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:08.872 01:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:08.872 01:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:08.872 01:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:08.872 01:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:08.872 01:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:08.872 01:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:08.872 01:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:08.872 01:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:08.872 01:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:08.872 01:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:08.872 01:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:16:08.872 01:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:08.872 01:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:08.872 01:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:08.872 01:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:08.872 01:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:08.872 01:21:34 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:08.872 01:21:34 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:08.872 01:21:34 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:08.872 01:21:34 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.872 01:21:34 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.873 01:21:34 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.873 01:21:34 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:08.873 01:21:34 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.873 01:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:16:08.873 01:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:08.873 01:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:08.873 01:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:08.873 01:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:08.873 01:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:08.873 01:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:08.873 01:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:08.873 01:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:08.873 01:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:08.873 01:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:08.873 01:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:08.873 01:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:08.873 01:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:08.873 01:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:08.873 01:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:08.873 01:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:16:08.873 01:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:08.873 01:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:08.873 01:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:08.873 01:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:08.873 01:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:08.873 01:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.873 01:21:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:08.873 01:21:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.873 01:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:08.873 01:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:08.873 01:21:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:08.873 01:21:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:13.050 01:21:38 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:13.050 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:13.050 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: 
cvl_0_0' 00:16:13.050 Found net devices under 0000:86:00.0: cvl_0_0 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:13.050 Found net devices under 0000:86:00.1: cvl_0_1 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:13.050 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:13.051 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:13.051 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:13.051 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:13.051 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:13.051 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:13.051 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:13.051 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:13.051 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:13.051 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:13.051 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:13.051 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:13.051 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:13.051 01:21:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:13.051 01:21:39 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:13.051 01:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:13.051 01:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:13.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:13.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:16:13.051 00:16:13.051 --- 10.0.0.2 ping statistics --- 00:16:13.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.051 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:16:13.051 01:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:13.308 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:13.308 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:16:13.308 00:16:13.308 --- 10.0.0.1 ping statistics --- 00:16:13.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.308 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:16:13.308 01:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:13.308 01:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:16:13.308 01:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:13.308 01:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:13.308 01:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:13.308 01:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:13.308 01:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:13.308 01:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:13.308 01:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:13.308 01:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:16:13.308 01:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:13.308 01:21:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:13.308 01:21:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.308 01:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3381771 00:16:13.308 01:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3381771 00:16:13.308 01:21:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3381771 ']' 00:16:13.308 01:21:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:13.308 01:21:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:13.308 01:21:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
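Note: the nvmf_tcp_init sequence traced above is what lets the initiator and the target share one host over real NICs: one e810 port is isolated in a private network namespace, so 10.0.0.1 (initiator, cvl_0_1) and 10.0.0.2 (target, cvl_0_0) reach each other over the wire instead of the loopback path. Condensed from the commands in the log:

    ip netns add cvl_0_0_ns_spdk                      # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator-side address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                # verify both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # before starting the target

Every target-side command, including the nvmf_tgt launch that follows, is then prefixed with `ip netns exec cvl_0_0_ns_spdk`.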
00:16:13.308 01:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:13.309 01:21:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:13.309 01:21:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.240 01:21:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:14.240 01:21:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:14.240 01:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:14.240 01:21:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:14.240 01:21:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.240 01:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:14.240 01:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=3382016 00:16:14.240 01:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:14.240 01:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:14.240 01:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:16:14.240 01:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:14.240 01:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:14.240 01:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:14.240 01:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:16:14.240 01:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:14.240 01:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:14.241 01:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ed0ced3c857f408790d04d58602d86acb7952e1bdfda9afc 00:16:14.241 01:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:14.241 01:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.I3t 00:16:14.241 01:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ed0ced3c857f408790d04d58602d86acb7952e1bdfda9afc 0 00:16:14.241 01:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ed0ced3c857f408790d04d58602d86acb7952e1bdfda9afc 0 00:16:14.241 01:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:14.241 01:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:14.241 01:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ed0ced3c857f408790d04d58602d86acb7952e1bdfda9afc 00:16:14.241 01:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:16:14.241 01:21:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.I3t 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.I3t 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # 
keys[0]=/tmp/spdk.key-null.I3t 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5fc3c50373464c977bc18ae5de28284bfd47b8e84695a4ed67dd7cdf5c24d5f9 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.7BO 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5fc3c50373464c977bc18ae5de28284bfd47b8e84695a4ed67dd7cdf5c24d5f9 3 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5fc3c50373464c977bc18ae5de28284bfd47b8e84695a4ed67dd7cdf5c24d5f9 3 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5fc3c50373464c977bc18ae5de28284bfd47b8e84695a4ed67dd7cdf5c24d5f9 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.7BO 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.7BO 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.7BO 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9327a92805cb812cc92a7ea034f9e8d5 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.mOy 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9327a92805cb812cc92a7ea034f9e8d5 1 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9327a92805cb812cc92a7ea034f9e8d5 1 00:16:14.241 01:21:40 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9327a92805cb812cc92a7ea034f9e8d5 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.mOy 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.mOy 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.mOy 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=dd8036dbb87e5de2dd7a64268799fa5e7742e970101b6943 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.aO3 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key dd8036dbb87e5de2dd7a64268799fa5e7742e970101b6943 2 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 dd8036dbb87e5de2dd7a64268799fa5e7742e970101b6943 2 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=dd8036dbb87e5de2dd7a64268799fa5e7742e970101b6943 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.aO3 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.aO3 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.aO3 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:14.241 01:21:40 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=83ff27f3dbb0434a1743a0829291103ec5be2bb4a677f6a6 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.S1F 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 83ff27f3dbb0434a1743a0829291103ec5be2bb4a677f6a6 2 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 83ff27f3dbb0434a1743a0829291103ec5be2bb4a677f6a6 2 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=83ff27f3dbb0434a1743a0829291103ec5be2bb4a677f6a6 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:14.241 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:14.499 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.S1F 00:16:14.499 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.S1F 00:16:14.499 01:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.S1F 00:16:14.499 01:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:16:14.499 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:14.499 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:14.499 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:14.499 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:14.499 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:14.499 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:14.499 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=dbd5c9bae0f05298f6c59acd4c64c593 00:16:14.500 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:14.500 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.2x2 00:16:14.500 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key dbd5c9bae0f05298f6c59acd4c64c593 1 00:16:14.500 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 dbd5c9bae0f05298f6c59acd4c64c593 1 00:16:14.500 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:14.500 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:14.500 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=dbd5c9bae0f05298f6c59acd4c64c593 00:16:14.500 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:14.500 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:14.500 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.2x2 00:16:14.500 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.2x2 00:16:14.500 01:21:40 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.2x2 00:16:14.500 01:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:16:14.500 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:14.500 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:14.500 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:14.500 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:14.500 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:14.500 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:14.500 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=55fa34311b33103d07152bb741a91d13559a73f665d2cba3a50d4b5481b166ba 00:16:14.500 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:14.500 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Atf 00:16:14.500 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 55fa34311b33103d07152bb741a91d13559a73f665d2cba3a50d4b5481b166ba 3 00:16:14.500 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 55fa34311b33103d07152bb741a91d13559a73f665d2cba3a50d4b5481b166ba 3 00:16:14.500 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:14.500 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:14.500 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=55fa34311b33103d07152bb741a91d13559a73f665d2cba3a50d4b5481b166ba 00:16:14.500 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:14.500 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:14.500 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Atf 00:16:14.500 01:21:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Atf 00:16:14.500 01:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.Atf 00:16:14.500 01:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:16:14.500 01:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 3381771 00:16:14.500 01:21:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3381771 ']' 00:16:14.500 01:21:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.500 01:21:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:14.500 01:21:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:14.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
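Each gen_dhchap_key invocation above follows the same pattern: read len/2 random bytes from /dev/urandom as a hex string, wrap that string in a DHHC-1 secret, and store it in a chmod-0600 temp file. A minimal standalone equivalent of gen_dhchap_key null 48 is sketched below; it assumes the standard DH-HMAC-CHAP secret encoding (base64 over the ASCII secret followed by its 4-byte little-endian CRC-32, with hash id 00/01/02/03 for null/sha256/sha384/sha512 in the middle field), which matches the DHHC-1:00:...: strings visible in the nvme connect traces later in this log. The .example path is a placeholder; the script itself uses mktemp -t spdk.key-null.XXX:

key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex characters; the hex string itself is the secret
python3 - "$key" > /tmp/spdk.key-null.example <<'PYEOF'
import base64, struct, sys, zlib
key = sys.argv[1].encode()
crc = struct.pack("<I", zlib.crc32(key))        # little-endian CRC-32 of the secret bytes
print("DHHC-1:00:" + base64.b64encode(key + crc).decode() + ":")
PYEOF
chmod 0600 /tmp/spdk.key-null.example

The resulting files are then registered with keyring_file_add_key on both the target RPC socket and the host's /var/tmp/host.sock, as the traces that follow show.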
00:16:14.500 01:21:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:14.500 01:21:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.758 01:21:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:14.758 01:21:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:14.758 01:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 3382016 /var/tmp/host.sock 00:16:14.758 01:21:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3382016 ']' 00:16:14.758 01:21:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:16:14.758 01:21:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:14.758 01:21:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:14.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:14.758 01:21:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:14.758 01:21:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.758 01:21:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:14.758 01:21:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:14.758 01:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:16:14.758 01:21:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.758 01:21:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.758 01:21:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.758 01:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:14.758 01:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.I3t 00:16:14.758 01:21:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.758 01:21:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.758 01:21:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.758 01:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.I3t 00:16:14.758 01:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.I3t 00:16:15.015 01:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.7BO ]] 00:16:15.015 01:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7BO 00:16:15.015 01:21:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.015 01:21:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.015 01:21:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.015 01:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7BO 00:16:15.015 01:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7BO 00:16:15.272 01:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:15.272 01:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.mOy 00:16:15.272 01:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.272 01:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.272 01:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.272 01:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.mOy 00:16:15.272 01:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.mOy 00:16:15.529 01:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.aO3 ]] 00:16:15.529 01:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.aO3 00:16:15.529 01:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.529 01:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.529 01:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.529 01:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.aO3 00:16:15.529 01:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.aO3 00:16:15.529 01:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:15.529 01:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.S1F 00:16:15.529 01:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.529 01:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.529 01:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.529 01:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.S1F 00:16:15.529 01:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.S1F 00:16:15.787 01:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.2x2 ]] 00:16:15.787 01:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.2x2 00:16:15.787 01:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.787 01:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.787 01:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.787 01:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.2x2 00:16:15.787 01:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.2x2 00:16:16.044 01:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:16.044 01:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Atf 00:16:16.044 01:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.044 01:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.044 01:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.044 01:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Atf 00:16:16.044 01:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Atf 00:16:16.044 01:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:16:16.044 01:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:16.044 01:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:16.044 01:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:16.044 01:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:16.044 01:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:16.302 01:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:16:16.302 01:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:16.302 01:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:16.302 01:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:16.302 01:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:16.302 01:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.302 01:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.302 01:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.302 01:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.302 01:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.302 01:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.302 01:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.560 00:16:16.560 01:21:42 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:16.560 01:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:16.560 01:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.560 01:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.560 01:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.560 01:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.560 01:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.560 01:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.560 01:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:16.561 { 00:16:16.561 "cntlid": 1, 00:16:16.561 "qid": 0, 00:16:16.561 "state": "enabled", 00:16:16.561 "thread": "nvmf_tgt_poll_group_000", 00:16:16.561 "listen_address": { 00:16:16.561 "trtype": "TCP", 00:16:16.561 "adrfam": "IPv4", 00:16:16.561 "traddr": "10.0.0.2", 00:16:16.561 "trsvcid": "4420" 00:16:16.561 }, 00:16:16.561 "peer_address": { 00:16:16.561 "trtype": "TCP", 00:16:16.561 "adrfam": "IPv4", 00:16:16.561 "traddr": "10.0.0.1", 00:16:16.561 "trsvcid": "40624" 00:16:16.561 }, 00:16:16.561 "auth": { 00:16:16.561 "state": "completed", 00:16:16.561 "digest": "sha256", 00:16:16.561 "dhgroup": "null" 00:16:16.561 } 00:16:16.561 } 00:16:16.561 ]' 00:16:16.817 01:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:16.817 01:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:16.817 01:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:16.817 01:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:16.817 01:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:16.817 01:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.817 01:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.817 01:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.074 01:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWQwY2VkM2M4NTdmNDA4NzkwZDA0ZDU4NjAyZDg2YWNiNzk1MmUxYmRmZGE5YWZjtDgDPA==: --dhchap-ctrl-secret DHHC-1:03:NWZjM2M1MDM3MzQ2NGM5NzdiYzE4YWU1ZGUyODI4NGJmZDQ3YjhlODQ2OTVhNGVkNjdkZDdjZGY1YzI0ZDVmOfBcvD4=: 00:16:17.637 01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.637 01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:17.637 01:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.637 01:21:43 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.637 01:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.637 01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:17.637 01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:17.637 01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:17.637 01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:16:17.637 01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:17.637 01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:17.637 01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:17.637 01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:17.637 01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.637 01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.637 01:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.637 01:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.637 01:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.637 01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.638 01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.894 00:16:17.894 01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:17.894 01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:17.894 01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.151 01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.151 01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.151 01:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.151 01:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.151 01:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.151 01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:18.151 { 00:16:18.151 "cntlid": 3, 00:16:18.151 "qid": 0, 00:16:18.151 
"state": "enabled", 00:16:18.151 "thread": "nvmf_tgt_poll_group_000", 00:16:18.151 "listen_address": { 00:16:18.151 "trtype": "TCP", 00:16:18.151 "adrfam": "IPv4", 00:16:18.151 "traddr": "10.0.0.2", 00:16:18.151 "trsvcid": "4420" 00:16:18.151 }, 00:16:18.151 "peer_address": { 00:16:18.151 "trtype": "TCP", 00:16:18.152 "adrfam": "IPv4", 00:16:18.152 "traddr": "10.0.0.1", 00:16:18.152 "trsvcid": "56082" 00:16:18.152 }, 00:16:18.152 "auth": { 00:16:18.152 "state": "completed", 00:16:18.152 "digest": "sha256", 00:16:18.152 "dhgroup": "null" 00:16:18.152 } 00:16:18.152 } 00:16:18.152 ]' 00:16:18.152 01:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:18.152 01:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:18.152 01:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:18.152 01:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:18.152 01:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:18.152 01:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.152 01:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.152 01:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.408 01:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTMyN2E5MjgwNWNiODEyY2M5MmE3ZWEwMzRmOWU4ZDVIPGp9: --dhchap-ctrl-secret DHHC-1:02:ZGQ4MDM2ZGJiODdlNWRlMmRkN2E2NDI2ODc5OWZhNWU3NzQyZTk3MDEwMWI2OTQzGm2RVA==: 00:16:18.973 01:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.973 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.973 01:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:18.973 01:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.973 01:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.973 01:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.973 01:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:18.973 01:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:18.973 01:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:19.231 01:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:16:19.231 01:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:19.231 01:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:19.231 01:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:19.231 01:21:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:19.231 01:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.231 01:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.231 01:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.231 01:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.231 01:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.231 01:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.231 01:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.488 00:16:19.488 01:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:19.488 01:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:19.488 01:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.488 01:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.488 01:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.488 01:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.488 01:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.488 01:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.488 01:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:19.488 { 00:16:19.488 "cntlid": 5, 00:16:19.488 "qid": 0, 00:16:19.488 "state": "enabled", 00:16:19.488 "thread": "nvmf_tgt_poll_group_000", 00:16:19.488 "listen_address": { 00:16:19.488 "trtype": "TCP", 00:16:19.488 "adrfam": "IPv4", 00:16:19.488 "traddr": "10.0.0.2", 00:16:19.488 "trsvcid": "4420" 00:16:19.488 }, 00:16:19.488 "peer_address": { 00:16:19.488 "trtype": "TCP", 00:16:19.488 "adrfam": "IPv4", 00:16:19.488 "traddr": "10.0.0.1", 00:16:19.488 "trsvcid": "56114" 00:16:19.488 }, 00:16:19.488 "auth": { 00:16:19.488 "state": "completed", 00:16:19.488 "digest": "sha256", 00:16:19.488 "dhgroup": "null" 00:16:19.488 } 00:16:19.488 } 00:16:19.488 ]' 00:16:19.488 01:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:19.744 01:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:19.744 01:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:19.744 01:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:19.744 01:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:16:19.744 01:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.744 01:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.744 01:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.744 01:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODNmZjI3ZjNkYmIwNDM0YTE3NDNhMDgyOTI5MTEwM2VjNWJlMmJiNGE2NzdmNmE2jIXSfw==: --dhchap-ctrl-secret DHHC-1:01:ZGJkNWM5YmFlMGYwNTI5OGY2YzU5YWNkNGM2NGM1OTPZAj1o: 00:16:20.306 01:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.306 01:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:20.306 01:21:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.306 01:21:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.563 01:21:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.563 01:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:20.563 01:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:20.563 01:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:20.563 01:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:16:20.563 01:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:20.563 01:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:20.563 01:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:20.563 01:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:20.563 01:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.563 01:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:20.563 01:21:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.563 01:21:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.563 01:21:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.563 01:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:20.563 01:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:20.819 00:16:20.819 01:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:20.819 01:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:20.819 01:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.076 01:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.076 01:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.076 01:21:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.076 01:21:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.076 01:21:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.076 01:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:21.076 { 00:16:21.076 "cntlid": 7, 00:16:21.076 "qid": 0, 00:16:21.076 "state": "enabled", 00:16:21.076 "thread": "nvmf_tgt_poll_group_000", 00:16:21.076 "listen_address": { 00:16:21.076 "trtype": "TCP", 00:16:21.076 "adrfam": "IPv4", 00:16:21.076 "traddr": "10.0.0.2", 00:16:21.076 "trsvcid": "4420" 00:16:21.076 }, 00:16:21.076 "peer_address": { 00:16:21.076 "trtype": "TCP", 00:16:21.076 "adrfam": "IPv4", 00:16:21.076 "traddr": "10.0.0.1", 00:16:21.076 "trsvcid": "56132" 00:16:21.076 }, 00:16:21.076 "auth": { 00:16:21.076 "state": "completed", 00:16:21.076 "digest": "sha256", 00:16:21.076 "dhgroup": "null" 00:16:21.076 } 00:16:21.076 } 00:16:21.076 ]' 00:16:21.076 01:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:21.076 01:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:21.076 01:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:21.076 01:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:21.076 01:21:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:21.076 01:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.076 01:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.076 01:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.332 01:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTVmYTM0MzExYjMzMTAzZDA3MTUyYmI3NDFhOTFkMTM1NTlhNzNmNjY1ZDJjYmEzYTUwZDRiNTQ4MWIxNjZiYVpMFa0=: 00:16:21.895 01:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.895 01:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:21.895 01:21:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.895 01:21:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.895 01:21:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.895 01:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:21.895 01:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:21.895 01:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:21.895 01:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:22.152 01:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:16:22.152 01:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:22.152 01:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:22.152 01:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:22.152 01:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:22.152 01:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.152 01:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.152 01:21:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.152 01:21:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.152 01:21:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.152 01:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.152 01:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.409 00:16:22.409 01:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:22.409 01:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:22.409 01:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.409 01:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.409 01:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.409 01:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:16:22.409 01:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.409 01:21:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.409 01:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:22.409 { 00:16:22.409 "cntlid": 9, 00:16:22.409 "qid": 0, 00:16:22.409 "state": "enabled", 00:16:22.409 "thread": "nvmf_tgt_poll_group_000", 00:16:22.409 "listen_address": { 00:16:22.409 "trtype": "TCP", 00:16:22.409 "adrfam": "IPv4", 00:16:22.409 "traddr": "10.0.0.2", 00:16:22.409 "trsvcid": "4420" 00:16:22.409 }, 00:16:22.409 "peer_address": { 00:16:22.409 "trtype": "TCP", 00:16:22.409 "adrfam": "IPv4", 00:16:22.409 "traddr": "10.0.0.1", 00:16:22.409 "trsvcid": "56156" 00:16:22.409 }, 00:16:22.409 "auth": { 00:16:22.409 "state": "completed", 00:16:22.409 "digest": "sha256", 00:16:22.409 "dhgroup": "ffdhe2048" 00:16:22.409 } 00:16:22.409 } 00:16:22.409 ]' 00:16:22.409 01:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:22.666 01:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:22.666 01:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:22.666 01:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:22.666 01:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:22.666 01:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.666 01:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.666 01:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.923 01:21:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWQwY2VkM2M4NTdmNDA4NzkwZDA0ZDU4NjAyZDg2YWNiNzk1MmUxYmRmZGE5YWZjtDgDPA==: --dhchap-ctrl-secret DHHC-1:03:NWZjM2M1MDM3MzQ2NGM5NzdiYzE4YWU1ZGUyODI4NGJmZDQ3YjhlODQ2OTVhNGVkNjdkZDdjZGY1YzI0ZDVmOfBcvD4=: 00:16:23.486 01:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.486 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.486 01:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:23.486 01:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.486 01:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.486 01:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.486 01:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:23.486 01:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:23.486 01:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:16:23.486 01:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:16:23.486 01:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:23.486 01:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:23.486 01:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:23.486 01:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:23.486 01:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.486 01:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.486 01:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.486 01:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.486 01:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.486 01:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.486 01:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.776 00:16:23.776 01:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:23.776 01:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:23.776 01:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.038 01:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.038 01:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.038 01:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.038 01:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.038 01:21:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.038 01:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:24.038 { 00:16:24.038 "cntlid": 11, 00:16:24.038 "qid": 0, 00:16:24.038 "state": "enabled", 00:16:24.038 "thread": "nvmf_tgt_poll_group_000", 00:16:24.038 "listen_address": { 00:16:24.038 "trtype": "TCP", 00:16:24.038 "adrfam": "IPv4", 00:16:24.038 "traddr": "10.0.0.2", 00:16:24.038 "trsvcid": "4420" 00:16:24.038 }, 00:16:24.038 "peer_address": { 00:16:24.038 "trtype": "TCP", 00:16:24.038 "adrfam": "IPv4", 00:16:24.038 "traddr": "10.0.0.1", 00:16:24.038 "trsvcid": "56184" 00:16:24.038 }, 00:16:24.038 "auth": { 00:16:24.038 "state": "completed", 00:16:24.038 "digest": "sha256", 00:16:24.038 "dhgroup": "ffdhe2048" 00:16:24.038 } 00:16:24.038 } 00:16:24.038 ]' 00:16:24.038 
01:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:24.038 01:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:24.038 01:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:24.038 01:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:24.038 01:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:24.038 01:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.038 01:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.038 01:21:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.295 01:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTMyN2E5MjgwNWNiODEyY2M5MmE3ZWEwMzRmOWU4ZDVIPGp9: --dhchap-ctrl-secret DHHC-1:02:ZGQ4MDM2ZGJiODdlNWRlMmRkN2E2NDI2ODc5OWZhNWU3NzQyZTk3MDEwMWI2OTQzGm2RVA==: 00:16:24.858 01:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.858 01:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:24.858 01:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.858 01:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.858 01:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.858 01:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:24.858 01:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:24.858 01:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:25.115 01:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:16:25.115 01:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:25.115 01:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:25.115 01:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:25.115 01:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:25.115 01:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.115 01:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.115 01:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.115 01:21:50 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:25.115 01:21:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.115 01:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.115 01:21:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.372 00:16:25.372 01:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:25.372 01:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:25.372 01:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.372 01:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.372 01:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.372 01:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.372 01:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.372 01:21:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.372 01:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:25.372 { 00:16:25.372 "cntlid": 13, 00:16:25.372 "qid": 0, 00:16:25.372 "state": "enabled", 00:16:25.372 "thread": "nvmf_tgt_poll_group_000", 00:16:25.372 "listen_address": { 00:16:25.372 "trtype": "TCP", 00:16:25.372 "adrfam": "IPv4", 00:16:25.372 "traddr": "10.0.0.2", 00:16:25.372 "trsvcid": "4420" 00:16:25.372 }, 00:16:25.372 "peer_address": { 00:16:25.372 "trtype": "TCP", 00:16:25.372 "adrfam": "IPv4", 00:16:25.372 "traddr": "10.0.0.1", 00:16:25.372 "trsvcid": "56198" 00:16:25.372 }, 00:16:25.372 "auth": { 00:16:25.372 "state": "completed", 00:16:25.372 "digest": "sha256", 00:16:25.372 "dhgroup": "ffdhe2048" 00:16:25.372 } 00:16:25.372 } 00:16:25.372 ]' 00:16:25.372 01:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:25.629 01:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:25.629 01:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:25.629 01:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:25.629 01:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:25.629 01:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.629 01:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.629 01:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.885 01:21:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODNmZjI3ZjNkYmIwNDM0YTE3NDNhMDgyOTI5MTEwM2VjNWJlMmJiNGE2NzdmNmE2jIXSfw==: --dhchap-ctrl-secret DHHC-1:01:ZGJkNWM5YmFlMGYwNTI5OGY2YzU5YWNkNGM2NGM1OTPZAj1o: 00:16:26.446 01:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.446 01:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:26.446 01:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.446 01:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.446 01:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.446 01:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:26.446 01:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:26.446 01:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:26.446 01:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:16:26.446 01:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:26.446 01:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:26.446 01:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:26.447 01:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:26.447 01:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.447 01:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:26.447 01:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.447 01:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.447 01:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.447 01:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:26.447 01:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:26.703 00:16:26.703 01:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:26.703 01:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:26.703 01:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.958 01:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.958 01:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.958 01:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.958 01:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.958 01:21:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.958 01:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:26.958 { 00:16:26.958 "cntlid": 15, 00:16:26.958 "qid": 0, 00:16:26.958 "state": "enabled", 00:16:26.958 "thread": "nvmf_tgt_poll_group_000", 00:16:26.958 "listen_address": { 00:16:26.958 "trtype": "TCP", 00:16:26.958 "adrfam": "IPv4", 00:16:26.958 "traddr": "10.0.0.2", 00:16:26.959 "trsvcid": "4420" 00:16:26.959 }, 00:16:26.959 "peer_address": { 00:16:26.959 "trtype": "TCP", 00:16:26.959 "adrfam": "IPv4", 00:16:26.959 "traddr": "10.0.0.1", 00:16:26.959 "trsvcid": "56238" 00:16:26.959 }, 00:16:26.959 "auth": { 00:16:26.959 "state": "completed", 00:16:26.959 "digest": "sha256", 00:16:26.959 "dhgroup": "ffdhe2048" 00:16:26.959 } 00:16:26.959 } 00:16:26.959 ]' 00:16:26.959 01:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:26.959 01:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:26.959 01:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:26.959 01:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:26.959 01:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:26.959 01:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.959 01:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.959 01:21:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.214 01:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTVmYTM0MzExYjMzMTAzZDA3MTUyYmI3NDFhOTFkMTM1NTlhNzNmNjY1ZDJjYmEzYTUwZDRiNTQ4MWIxNjZiYVpMFa0=: 00:16:27.777 01:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.777 01:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:27.777 01:21:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.777 01:21:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.777 01:21:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.777 01:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:27.777 01:21:53 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:27.777 01:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:27.777 01:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:28.035 01:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:16:28.035 01:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:28.035 01:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:28.035 01:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:28.035 01:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:28.035 01:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.035 01:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.035 01:21:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.035 01:21:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.035 01:21:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.035 01:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.035 01:21:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.292 00:16:28.292 01:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:28.292 01:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:28.292 01:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.292 01:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.292 01:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.292 01:21:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.292 01:21:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.292 01:21:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.292 01:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:28.292 { 00:16:28.292 "cntlid": 17, 00:16:28.292 "qid": 0, 00:16:28.292 "state": "enabled", 00:16:28.292 "thread": "nvmf_tgt_poll_group_000", 00:16:28.292 "listen_address": { 00:16:28.292 "trtype": "TCP", 00:16:28.292 "adrfam": "IPv4", 00:16:28.292 "traddr": 
"10.0.0.2", 00:16:28.292 "trsvcid": "4420" 00:16:28.292 }, 00:16:28.292 "peer_address": { 00:16:28.292 "trtype": "TCP", 00:16:28.292 "adrfam": "IPv4", 00:16:28.292 "traddr": "10.0.0.1", 00:16:28.292 "trsvcid": "35520" 00:16:28.292 }, 00:16:28.292 "auth": { 00:16:28.292 "state": "completed", 00:16:28.292 "digest": "sha256", 00:16:28.292 "dhgroup": "ffdhe3072" 00:16:28.292 } 00:16:28.292 } 00:16:28.292 ]' 00:16:28.292 01:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:28.550 01:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:28.550 01:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:28.550 01:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:28.550 01:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:28.550 01:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.550 01:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.550 01:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.550 01:21:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWQwY2VkM2M4NTdmNDA4NzkwZDA0ZDU4NjAyZDg2YWNiNzk1MmUxYmRmZGE5YWZjtDgDPA==: --dhchap-ctrl-secret DHHC-1:03:NWZjM2M1MDM3MzQ2NGM5NzdiYzE4YWU1ZGUyODI4NGJmZDQ3YjhlODQ2OTVhNGVkNjdkZDdjZGY1YzI0ZDVmOfBcvD4=: 00:16:29.115 01:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.115 01:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:29.115 01:21:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.115 01:21:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.115 01:21:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.372 01:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:29.372 01:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:29.372 01:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:29.372 01:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:16:29.372 01:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:29.372 01:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:29.372 01:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:29.372 01:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:29.372 01:21:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.372 01:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.372 01:21:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.372 01:21:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.372 01:21:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.372 01:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.372 01:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.630 00:16:29.630 01:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:29.630 01:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:29.630 01:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.887 01:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.887 01:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.887 01:21:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.887 01:21:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.887 01:21:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.887 01:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:29.887 { 00:16:29.887 "cntlid": 19, 00:16:29.887 "qid": 0, 00:16:29.887 "state": "enabled", 00:16:29.887 "thread": "nvmf_tgt_poll_group_000", 00:16:29.887 "listen_address": { 00:16:29.887 "trtype": "TCP", 00:16:29.887 "adrfam": "IPv4", 00:16:29.887 "traddr": "10.0.0.2", 00:16:29.887 "trsvcid": "4420" 00:16:29.887 }, 00:16:29.887 "peer_address": { 00:16:29.887 "trtype": "TCP", 00:16:29.887 "adrfam": "IPv4", 00:16:29.887 "traddr": "10.0.0.1", 00:16:29.887 "trsvcid": "35566" 00:16:29.887 }, 00:16:29.887 "auth": { 00:16:29.887 "state": "completed", 00:16:29.887 "digest": "sha256", 00:16:29.887 "dhgroup": "ffdhe3072" 00:16:29.887 } 00:16:29.887 } 00:16:29.887 ]' 00:16:29.887 01:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:29.887 01:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:29.887 01:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:29.887 01:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:29.887 01:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:29.887 01:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.887 01:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.887 01:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.145 01:21:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTMyN2E5MjgwNWNiODEyY2M5MmE3ZWEwMzRmOWU4ZDVIPGp9: --dhchap-ctrl-secret DHHC-1:02:ZGQ4MDM2ZGJiODdlNWRlMmRkN2E2NDI2ODc5OWZhNWU3NzQyZTk3MDEwMWI2OTQzGm2RVA==: 00:16:30.709 01:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.709 01:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:30.709 01:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.709 01:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.709 01:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.709 01:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:30.709 01:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:30.709 01:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:30.967 01:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:16:30.967 01:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:30.967 01:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:30.967 01:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:30.967 01:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:30.967 01:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.967 01:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.967 01:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.967 01:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.967 01:21:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.967 01:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.967 01:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.226 00:16:31.226 01:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:31.226 01:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:31.226 01:21:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.226 01:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.226 01:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.226 01:21:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.226 01:21:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.226 01:21:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.226 01:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:31.226 { 00:16:31.226 "cntlid": 21, 00:16:31.226 "qid": 0, 00:16:31.226 "state": "enabled", 00:16:31.226 "thread": "nvmf_tgt_poll_group_000", 00:16:31.226 "listen_address": { 00:16:31.226 "trtype": "TCP", 00:16:31.226 "adrfam": "IPv4", 00:16:31.226 "traddr": "10.0.0.2", 00:16:31.226 "trsvcid": "4420" 00:16:31.226 }, 00:16:31.226 "peer_address": { 00:16:31.226 "trtype": "TCP", 00:16:31.226 "adrfam": "IPv4", 00:16:31.226 "traddr": "10.0.0.1", 00:16:31.226 "trsvcid": "35588" 00:16:31.226 }, 00:16:31.226 "auth": { 00:16:31.226 "state": "completed", 00:16:31.226 "digest": "sha256", 00:16:31.226 "dhgroup": "ffdhe3072" 00:16:31.226 } 00:16:31.226 } 00:16:31.226 ]' 00:16:31.226 01:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:31.226 01:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:31.226 01:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:31.483 01:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:31.484 01:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:31.484 01:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.484 01:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.484 01:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.484 01:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODNmZjI3ZjNkYmIwNDM0YTE3NDNhMDgyOTI5MTEwM2VjNWJlMmJiNGE2NzdmNmE2jIXSfw==: --dhchap-ctrl-secret DHHC-1:01:ZGJkNWM5YmFlMGYwNTI5OGY2YzU5YWNkNGM2NGM1OTPZAj1o: 00:16:32.049 01:21:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
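Each round of this trace follows the same pattern from target/auth.sh: pin the host to a single digest/dhgroup combination, authorize the host NQN on the subsystem with the key pair under test, authenticate once through the SPDK initiator (bdev_nvme_attach_controller) and once through the kernel initiator (nvme connect), verify the negotiated parameters on the target's qpair, then tear everything down. Below is a minimal sketch of that loop, reconstructed only from the commands visible in this trace; the hostrpc/rpc_cmd wrappers and the keys[]/ckeys[] arrays stand in for helpers defined earlier in the script, outside this section.

#!/usr/bin/env bash
# Sketch of the loop exercised above. Assumptions (not shown in this
# section of the trace): keys[]/ckeys[] and dhgroups[] were populated
# earlier in target/auth.sh, and rpc_cmd talks to the target app's
# default RPC socket while hostrpc talks to the host app's socket.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }  # host-side SPDK app
rpc_cmd() { "$rpc" "$@"; }                        # target-side SPDK app
nqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562

for dhgroup in "${dhgroups[@]}"; do
  for keyid in "${!keys[@]}"; do
    # Restrict the host to one digest/dhgroup pair, so a successful
    # attach proves that exact combination was negotiated.
    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
    # Authorize the host with the key under test; the controller key is
    # optional (key3 in this run has none, so bidirectional auth is skipped).
    rpc_cmd nvmf_subsystem_add_host "$nqn" "$hostnqn" --dhchap-key "key$keyid" \
        ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
    # Path 1: SPDK initiator; attach, then inspect the qpair on the target.
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$nqn" --dhchap-key "key$keyid" \
        ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    [[ $(rpc_cmd nvmf_subsystem_get_qpairs "$nqn" | jq -r '.[0].auth.state') == completed ]]
    hostrpc bdev_nvme_detach_controller nvme0
    # Path 2: kernel initiator via nvme-cli, passing the raw DHHC-1 secrets.
    nvme connect -t tcp -a 10.0.0.2 -n "$nqn" -i 1 -q "$hostnqn" \
        --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 \
        --dhchap-secret "${keys[$keyid]}" \
        ${ckeys[$keyid]:+--dhchap-ctrl-secret "${ckeys[$keyid]}"}
    nvme disconnect -n "$nqn"
    rpc_cmd nvmf_subsystem_remove_host "$nqn" "$hostnqn"
  done
done

Driving both initiators against the same subsystem keys gives cross-implementation coverage of the DH-HMAC-CHAP handshake for every digest/dhgroup/key combination, which is why the qpair dump, jq checks, and connect/disconnect lines repeat verbatim through the rest of this trace.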
00:16:32.049 01:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:32.049 01:21:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.049 01:21:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.049 01:21:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.049 01:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:32.049 01:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:32.049 01:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:32.307 01:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:16:32.307 01:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:32.307 01:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:32.307 01:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:32.307 01:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:32.307 01:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.307 01:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:32.307 01:21:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.307 01:21:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.307 01:21:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.307 01:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:32.307 01:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:32.564 00:16:32.564 01:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:32.564 01:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.564 01:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:32.822 01:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.822 01:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.822 01:21:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.822 01:21:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:16:32.822 01:21:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.822 01:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:32.822 { 00:16:32.822 "cntlid": 23, 00:16:32.822 "qid": 0, 00:16:32.822 "state": "enabled", 00:16:32.822 "thread": "nvmf_tgt_poll_group_000", 00:16:32.822 "listen_address": { 00:16:32.822 "trtype": "TCP", 00:16:32.822 "adrfam": "IPv4", 00:16:32.822 "traddr": "10.0.0.2", 00:16:32.822 "trsvcid": "4420" 00:16:32.822 }, 00:16:32.822 "peer_address": { 00:16:32.822 "trtype": "TCP", 00:16:32.822 "adrfam": "IPv4", 00:16:32.822 "traddr": "10.0.0.1", 00:16:32.822 "trsvcid": "35612" 00:16:32.822 }, 00:16:32.822 "auth": { 00:16:32.822 "state": "completed", 00:16:32.822 "digest": "sha256", 00:16:32.822 "dhgroup": "ffdhe3072" 00:16:32.822 } 00:16:32.822 } 00:16:32.822 ]' 00:16:32.822 01:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:32.822 01:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:32.822 01:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:32.822 01:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:32.822 01:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:32.822 01:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.822 01:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.822 01:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.080 01:21:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTVmYTM0MzExYjMzMTAzZDA3MTUyYmI3NDFhOTFkMTM1NTlhNzNmNjY1ZDJjYmEzYTUwZDRiNTQ4MWIxNjZiYVpMFa0=: 00:16:33.645 01:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.645 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.645 01:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:33.645 01:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.645 01:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.645 01:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.645 01:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:33.645 01:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:33.645 01:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:33.645 01:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:33.903 01:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:16:33.903 01:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:33.903 01:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:33.903 01:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:33.903 01:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:33.903 01:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.903 01:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.903 01:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.903 01:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.903 01:21:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.903 01:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.903 01:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.160 00:16:34.160 01:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:34.160 01:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:34.160 01:21:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.417 01:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.417 01:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.417 01:22:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.417 01:22:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.417 01:22:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.417 01:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:34.417 { 00:16:34.417 "cntlid": 25, 00:16:34.417 "qid": 0, 00:16:34.417 "state": "enabled", 00:16:34.417 "thread": "nvmf_tgt_poll_group_000", 00:16:34.417 "listen_address": { 00:16:34.417 "trtype": "TCP", 00:16:34.417 "adrfam": "IPv4", 00:16:34.417 "traddr": "10.0.0.2", 00:16:34.417 "trsvcid": "4420" 00:16:34.418 }, 00:16:34.418 "peer_address": { 00:16:34.418 "trtype": "TCP", 00:16:34.418 "adrfam": "IPv4", 00:16:34.418 "traddr": "10.0.0.1", 00:16:34.418 "trsvcid": "35634" 00:16:34.418 }, 00:16:34.418 "auth": { 00:16:34.418 "state": "completed", 00:16:34.418 "digest": "sha256", 00:16:34.418 "dhgroup": "ffdhe4096" 00:16:34.418 } 00:16:34.418 } 00:16:34.418 ]' 00:16:34.418 01:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:34.418 01:22:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:34.418 01:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:34.418 01:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:34.418 01:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:34.418 01:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.418 01:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.418 01:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.675 01:22:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWQwY2VkM2M4NTdmNDA4NzkwZDA0ZDU4NjAyZDg2YWNiNzk1MmUxYmRmZGE5YWZjtDgDPA==: --dhchap-ctrl-secret DHHC-1:03:NWZjM2M1MDM3MzQ2NGM5NzdiYzE4YWU1ZGUyODI4NGJmZDQ3YjhlODQ2OTVhNGVkNjdkZDdjZGY1YzI0ZDVmOfBcvD4=: 00:16:35.240 01:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.240 01:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:35.240 01:22:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.240 01:22:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.240 01:22:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.240 01:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:35.240 01:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:35.240 01:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:35.240 01:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:16:35.240 01:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:35.240 01:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:35.240 01:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:35.240 01:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:35.240 01:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.240 01:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.240 01:22:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.240 01:22:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.240 01:22:01 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.240 01:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.240 01:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.496 00:16:35.496 01:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:35.496 01:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:35.496 01:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.751 01:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.751 01:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.751 01:22:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.751 01:22:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.751 01:22:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.751 01:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:35.751 { 00:16:35.751 "cntlid": 27, 00:16:35.751 "qid": 0, 00:16:35.751 "state": "enabled", 00:16:35.751 "thread": "nvmf_tgt_poll_group_000", 00:16:35.751 "listen_address": { 00:16:35.751 "trtype": "TCP", 00:16:35.751 "adrfam": "IPv4", 00:16:35.751 "traddr": "10.0.0.2", 00:16:35.751 "trsvcid": "4420" 00:16:35.751 }, 00:16:35.751 "peer_address": { 00:16:35.751 "trtype": "TCP", 00:16:35.751 "adrfam": "IPv4", 00:16:35.751 "traddr": "10.0.0.1", 00:16:35.751 "trsvcid": "35670" 00:16:35.751 }, 00:16:35.751 "auth": { 00:16:35.751 "state": "completed", 00:16:35.751 "digest": "sha256", 00:16:35.751 "dhgroup": "ffdhe4096" 00:16:35.751 } 00:16:35.751 } 00:16:35.751 ]' 00:16:35.751 01:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:35.751 01:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:35.751 01:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:35.751 01:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:35.751 01:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:36.007 01:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.007 01:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.007 01:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.007 01:22:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTMyN2E5MjgwNWNiODEyY2M5MmE3ZWEwMzRmOWU4ZDVIPGp9: --dhchap-ctrl-secret DHHC-1:02:ZGQ4MDM2ZGJiODdlNWRlMmRkN2E2NDI2ODc5OWZhNWU3NzQyZTk3MDEwMWI2OTQzGm2RVA==: 00:16:36.569 01:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.569 01:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:36.569 01:22:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.569 01:22:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.569 01:22:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.569 01:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:36.569 01:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:36.569 01:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:36.825 01:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:16:36.825 01:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:36.825 01:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:36.825 01:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:36.825 01:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:36.825 01:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.825 01:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.825 01:22:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.825 01:22:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.825 01:22:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.825 01:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.825 01:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.081 00:16:37.081 01:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:37.081 01:22:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:37.081 01:22:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.338 01:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.338 01:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.338 01:22:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.338 01:22:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.338 01:22:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.338 01:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:37.338 { 00:16:37.338 "cntlid": 29, 00:16:37.338 "qid": 0, 00:16:37.338 "state": "enabled", 00:16:37.338 "thread": "nvmf_tgt_poll_group_000", 00:16:37.338 "listen_address": { 00:16:37.338 "trtype": "TCP", 00:16:37.338 "adrfam": "IPv4", 00:16:37.338 "traddr": "10.0.0.2", 00:16:37.338 "trsvcid": "4420" 00:16:37.338 }, 00:16:37.338 "peer_address": { 00:16:37.338 "trtype": "TCP", 00:16:37.338 "adrfam": "IPv4", 00:16:37.338 "traddr": "10.0.0.1", 00:16:37.338 "trsvcid": "35698" 00:16:37.338 }, 00:16:37.338 "auth": { 00:16:37.338 "state": "completed", 00:16:37.338 "digest": "sha256", 00:16:37.338 "dhgroup": "ffdhe4096" 00:16:37.338 } 00:16:37.338 } 00:16:37.338 ]' 00:16:37.338 01:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:37.338 01:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:37.338 01:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:37.338 01:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:37.338 01:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:37.338 01:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.338 01:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.338 01:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.598 01:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODNmZjI3ZjNkYmIwNDM0YTE3NDNhMDgyOTI5MTEwM2VjNWJlMmJiNGE2NzdmNmE2jIXSfw==: --dhchap-ctrl-secret DHHC-1:01:ZGJkNWM5YmFlMGYwNTI5OGY2YzU5YWNkNGM2NGM1OTPZAj1o: 00:16:38.160 01:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.160 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.160 01:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:38.160 01:22:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.160 01:22:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.160 01:22:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.160 01:22:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:38.160 01:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:38.160 01:22:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:38.416 01:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:16:38.416 01:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:38.416 01:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:38.416 01:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:38.416 01:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:38.416 01:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.416 01:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:38.416 01:22:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.416 01:22:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.416 01:22:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.416 01:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:38.416 01:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:38.673 00:16:38.673 01:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:38.673 01:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:38.673 01:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.673 01:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.673 01:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.673 01:22:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.673 01:22:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.673 01:22:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.673 01:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:38.673 { 00:16:38.673 "cntlid": 31, 00:16:38.673 "qid": 0, 00:16:38.673 "state": "enabled", 00:16:38.673 "thread": "nvmf_tgt_poll_group_000", 00:16:38.673 "listen_address": { 00:16:38.673 "trtype": "TCP", 00:16:38.673 "adrfam": "IPv4", 00:16:38.673 "traddr": "10.0.0.2", 00:16:38.673 "trsvcid": "4420" 00:16:38.673 }, 
00:16:38.673 "peer_address": { 00:16:38.673 "trtype": "TCP", 00:16:38.673 "adrfam": "IPv4", 00:16:38.673 "traddr": "10.0.0.1", 00:16:38.673 "trsvcid": "56100" 00:16:38.673 }, 00:16:38.673 "auth": { 00:16:38.673 "state": "completed", 00:16:38.673 "digest": "sha256", 00:16:38.673 "dhgroup": "ffdhe4096" 00:16:38.673 } 00:16:38.673 } 00:16:38.673 ]' 00:16:38.673 01:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:38.929 01:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:38.929 01:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:38.929 01:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:38.929 01:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:38.929 01:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.929 01:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.929 01:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.929 01:22:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTVmYTM0MzExYjMzMTAzZDA3MTUyYmI3NDFhOTFkMTM1NTlhNzNmNjY1ZDJjYmEzYTUwZDRiNTQ4MWIxNjZiYVpMFa0=: 00:16:39.492 01:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.492 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.492 01:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:39.492 01:22:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.492 01:22:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.492 01:22:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.492 01:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:39.492 01:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:39.492 01:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:39.492 01:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:39.748 01:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:16:39.748 01:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:39.748 01:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:39.748 01:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:39.748 01:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:39.748 01:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:16:39.748 01:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.748 01:22:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.748 01:22:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.748 01:22:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.748 01:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.748 01:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.004 00:16:40.004 01:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:40.004 01:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:40.004 01:22:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.260 01:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.260 01:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.260 01:22:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.260 01:22:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.260 01:22:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.260 01:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:40.260 { 00:16:40.260 "cntlid": 33, 00:16:40.260 "qid": 0, 00:16:40.260 "state": "enabled", 00:16:40.260 "thread": "nvmf_tgt_poll_group_000", 00:16:40.260 "listen_address": { 00:16:40.260 "trtype": "TCP", 00:16:40.260 "adrfam": "IPv4", 00:16:40.260 "traddr": "10.0.0.2", 00:16:40.260 "trsvcid": "4420" 00:16:40.260 }, 00:16:40.260 "peer_address": { 00:16:40.260 "trtype": "TCP", 00:16:40.260 "adrfam": "IPv4", 00:16:40.260 "traddr": "10.0.0.1", 00:16:40.260 "trsvcid": "56134" 00:16:40.260 }, 00:16:40.260 "auth": { 00:16:40.260 "state": "completed", 00:16:40.260 "digest": "sha256", 00:16:40.260 "dhgroup": "ffdhe6144" 00:16:40.260 } 00:16:40.260 } 00:16:40.260 ]' 00:16:40.260 01:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:40.260 01:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:40.260 01:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:40.516 01:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:40.516 01:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:40.516 01:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.516 01:22:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.516 01:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.516 01:22:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWQwY2VkM2M4NTdmNDA4NzkwZDA0ZDU4NjAyZDg2YWNiNzk1MmUxYmRmZGE5YWZjtDgDPA==: --dhchap-ctrl-secret DHHC-1:03:NWZjM2M1MDM3MzQ2NGM5NzdiYzE4YWU1ZGUyODI4NGJmZDQ3YjhlODQ2OTVhNGVkNjdkZDdjZGY1YzI0ZDVmOfBcvD4=: 00:16:41.078 01:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.078 01:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:41.078 01:22:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.078 01:22:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.078 01:22:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.078 01:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:41.078 01:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:41.078 01:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:41.335 01:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:16:41.335 01:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:41.335 01:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:41.335 01:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:41.335 01:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:41.335 01:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.335 01:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.335 01:22:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.335 01:22:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.335 01:22:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.335 01:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.335 01:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.592 00:16:41.592 01:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:41.592 01:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.592 01:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:41.849 01:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.849 01:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.849 01:22:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.849 01:22:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.849 01:22:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.849 01:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:41.849 { 00:16:41.849 "cntlid": 35, 00:16:41.849 "qid": 0, 00:16:41.849 "state": "enabled", 00:16:41.849 "thread": "nvmf_tgt_poll_group_000", 00:16:41.849 "listen_address": { 00:16:41.849 "trtype": "TCP", 00:16:41.849 "adrfam": "IPv4", 00:16:41.849 "traddr": "10.0.0.2", 00:16:41.849 "trsvcid": "4420" 00:16:41.849 }, 00:16:41.849 "peer_address": { 00:16:41.849 "trtype": "TCP", 00:16:41.849 "adrfam": "IPv4", 00:16:41.849 "traddr": "10.0.0.1", 00:16:41.849 "trsvcid": "56160" 00:16:41.849 }, 00:16:41.849 "auth": { 00:16:41.849 "state": "completed", 00:16:41.849 "digest": "sha256", 00:16:41.849 "dhgroup": "ffdhe6144" 00:16:41.849 } 00:16:41.849 } 00:16:41.849 ]' 00:16:41.849 01:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:41.849 01:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:41.849 01:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:41.849 01:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:41.849 01:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:42.106 01:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.106 01:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.106 01:22:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.106 01:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTMyN2E5MjgwNWNiODEyY2M5MmE3ZWEwMzRmOWU4ZDVIPGp9: --dhchap-ctrl-secret DHHC-1:02:ZGQ4MDM2ZGJiODdlNWRlMmRkN2E2NDI2ODc5OWZhNWU3NzQyZTk3MDEwMWI2OTQzGm2RVA==: 00:16:42.668 01:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.668 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.668 01:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:42.668 01:22:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.668 01:22:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.668 01:22:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.668 01:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:42.668 01:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:42.668 01:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:42.924 01:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:16:42.924 01:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:42.924 01:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:42.924 01:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:42.924 01:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:42.924 01:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.924 01:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.924 01:22:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.924 01:22:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.924 01:22:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.924 01:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.924 01:22:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.180 00:16:43.180 01:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:43.180 01:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:43.180 01:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.435 01:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.435 01:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.435 01:22:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.435 01:22:09 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:43.435 01:22:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.435 01:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:43.435 { 00:16:43.435 "cntlid": 37, 00:16:43.435 "qid": 0, 00:16:43.435 "state": "enabled", 00:16:43.435 "thread": "nvmf_tgt_poll_group_000", 00:16:43.435 "listen_address": { 00:16:43.435 "trtype": "TCP", 00:16:43.435 "adrfam": "IPv4", 00:16:43.435 "traddr": "10.0.0.2", 00:16:43.435 "trsvcid": "4420" 00:16:43.435 }, 00:16:43.436 "peer_address": { 00:16:43.436 "trtype": "TCP", 00:16:43.436 "adrfam": "IPv4", 00:16:43.436 "traddr": "10.0.0.1", 00:16:43.436 "trsvcid": "56196" 00:16:43.436 }, 00:16:43.436 "auth": { 00:16:43.436 "state": "completed", 00:16:43.436 "digest": "sha256", 00:16:43.436 "dhgroup": "ffdhe6144" 00:16:43.436 } 00:16:43.436 } 00:16:43.436 ]' 00:16:43.436 01:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:43.436 01:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:43.436 01:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:43.436 01:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:43.436 01:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:43.692 01:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.692 01:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.692 01:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.692 01:22:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODNmZjI3ZjNkYmIwNDM0YTE3NDNhMDgyOTI5MTEwM2VjNWJlMmJiNGE2NzdmNmE2jIXSfw==: --dhchap-ctrl-secret DHHC-1:01:ZGJkNWM5YmFlMGYwNTI5OGY2YzU5YWNkNGM2NGM1OTPZAj1o: 00:16:44.268 01:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.268 01:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:44.268 01:22:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.268 01:22:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.268 01:22:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.268 01:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:44.268 01:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:44.268 01:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:44.524 01:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:16:44.524 01:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:44.524 01:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:44.524 01:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:44.524 01:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:44.524 01:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.524 01:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:44.524 01:22:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.524 01:22:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.524 01:22:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.524 01:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:44.524 01:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:44.780 00:16:44.780 01:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:44.780 01:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:44.780 01:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.126 01:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.126 01:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.126 01:22:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.126 01:22:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.126 01:22:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.126 01:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:45.126 { 00:16:45.126 "cntlid": 39, 00:16:45.126 "qid": 0, 00:16:45.126 "state": "enabled", 00:16:45.126 "thread": "nvmf_tgt_poll_group_000", 00:16:45.126 "listen_address": { 00:16:45.126 "trtype": "TCP", 00:16:45.126 "adrfam": "IPv4", 00:16:45.126 "traddr": "10.0.0.2", 00:16:45.126 "trsvcid": "4420" 00:16:45.126 }, 00:16:45.126 "peer_address": { 00:16:45.126 "trtype": "TCP", 00:16:45.126 "adrfam": "IPv4", 00:16:45.126 "traddr": "10.0.0.1", 00:16:45.126 "trsvcid": "56224" 00:16:45.126 }, 00:16:45.126 "auth": { 00:16:45.126 "state": "completed", 00:16:45.126 "digest": "sha256", 00:16:45.126 "dhgroup": "ffdhe6144" 00:16:45.126 } 00:16:45.126 } 00:16:45.126 ]' 00:16:45.126 01:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:45.126 01:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:45.126 01:22:10 
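# Note that key3 above is registered without --dhchap-ctrlr-key: the
# ${ckeys[$3]:+...} expansion appends the controller-key flag only when a
# ckey exists for that key id, so empty slots fall back to unidirectional
# DH-HMAC-CHAP. A minimal sketch of the same pattern; the array contents and
# $subnqn/$hostnqn are stand-ins for the values spelled out in the log, not
# taken verbatim from the script:
ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2 [3]="")   # key3 deliberately left blank
keyid=3
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})   # expands to nothing here
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" "${ckey[@]}"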
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:45.126 01:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:45.126 01:22:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:45.126 01:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.126 01:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.126 01:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.382 01:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTVmYTM0MzExYjMzMTAzZDA3MTUyYmI3NDFhOTFkMTM1NTlhNzNmNjY1ZDJjYmEzYTUwZDRiNTQ4MWIxNjZiYVpMFa0=: 00:16:45.948 01:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.948 01:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:45.948 01:22:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.948 01:22:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.948 01:22:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.948 01:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:45.948 01:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:45.948 01:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:45.948 01:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:45.948 01:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:16:45.948 01:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:45.948 01:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:45.948 01:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:45.948 01:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:45.948 01:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.948 01:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.948 01:22:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.948 01:22:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.948 01:22:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.948 01:22:11 
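# Every iteration in this log follows the same shape; a condensed sketch of
# one round, with $subnqn/$hostnqn standing in for the NQNs spelled out above
# and hostrpc/rpc_cmd being the script's RPC wrappers (the kernel-initiator
# pass via nvme connect is sketched further down):
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
# ... qpair/auth checks as in the sketch above ...
hostrpc bdev_nvme_detach_controller nvme0
rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"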
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.948 01:22:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.513 00:16:46.513 01:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:46.513 01:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:46.513 01:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.771 01:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.771 01:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.771 01:22:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.771 01:22:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.771 01:22:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.771 01:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:46.771 { 00:16:46.771 "cntlid": 41, 00:16:46.771 "qid": 0, 00:16:46.771 "state": "enabled", 00:16:46.771 "thread": "nvmf_tgt_poll_group_000", 00:16:46.771 "listen_address": { 00:16:46.771 "trtype": "TCP", 00:16:46.771 "adrfam": "IPv4", 00:16:46.771 "traddr": "10.0.0.2", 00:16:46.771 "trsvcid": "4420" 00:16:46.771 }, 00:16:46.771 "peer_address": { 00:16:46.771 "trtype": "TCP", 00:16:46.771 "adrfam": "IPv4", 00:16:46.771 "traddr": "10.0.0.1", 00:16:46.771 "trsvcid": "56256" 00:16:46.771 }, 00:16:46.771 "auth": { 00:16:46.771 "state": "completed", 00:16:46.771 "digest": "sha256", 00:16:46.771 "dhgroup": "ffdhe8192" 00:16:46.771 } 00:16:46.771 } 00:16:46.771 ]' 00:16:46.771 01:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:46.771 01:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:46.771 01:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:46.771 01:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:46.771 01:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:46.771 01:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.771 01:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.771 01:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.029 01:22:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret 
DHHC-1:00:ZWQwY2VkM2M4NTdmNDA4NzkwZDA0ZDU4NjAyZDg2YWNiNzk1MmUxYmRmZGE5YWZjtDgDPA==: --dhchap-ctrl-secret DHHC-1:03:NWZjM2M1MDM3MzQ2NGM5NzdiYzE4YWU1ZGUyODI4NGJmZDQ3YjhlODQ2OTVhNGVkNjdkZDdjZGY1YzI0ZDVmOfBcvD4=: 00:16:47.613 01:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.614 01:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:47.614 01:22:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.614 01:22:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.614 01:22:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.614 01:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:47.614 01:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:47.614 01:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:47.614 01:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:16:47.614 01:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:47.614 01:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:47.614 01:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:47.614 01:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:47.614 01:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.614 01:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.614 01:22:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.614 01:22:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.614 01:22:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.614 01:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.614 01:22:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.180 00:16:48.180 01:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:48.180 01:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:48.180 01:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.437 01:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.437 01:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.437 01:22:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.437 01:22:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.437 01:22:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.437 01:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:48.437 { 00:16:48.437 "cntlid": 43, 00:16:48.437 "qid": 0, 00:16:48.437 "state": "enabled", 00:16:48.437 "thread": "nvmf_tgt_poll_group_000", 00:16:48.437 "listen_address": { 00:16:48.437 "trtype": "TCP", 00:16:48.437 "adrfam": "IPv4", 00:16:48.437 "traddr": "10.0.0.2", 00:16:48.437 "trsvcid": "4420" 00:16:48.437 }, 00:16:48.437 "peer_address": { 00:16:48.437 "trtype": "TCP", 00:16:48.437 "adrfam": "IPv4", 00:16:48.437 "traddr": "10.0.0.1", 00:16:48.437 "trsvcid": "45074" 00:16:48.437 }, 00:16:48.437 "auth": { 00:16:48.437 "state": "completed", 00:16:48.437 "digest": "sha256", 00:16:48.437 "dhgroup": "ffdhe8192" 00:16:48.437 } 00:16:48.437 } 00:16:48.437 ]' 00:16:48.437 01:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:48.437 01:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:48.437 01:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:48.437 01:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:48.437 01:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:48.437 01:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.437 01:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.437 01:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.695 01:22:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTMyN2E5MjgwNWNiODEyY2M5MmE3ZWEwMzRmOWU4ZDVIPGp9: --dhchap-ctrl-secret DHHC-1:02:ZGQ4MDM2ZGJiODdlNWRlMmRkN2E2NDI2ODc5OWZhNWU3NzQyZTk3MDEwMWI2OTQzGm2RVA==: 00:16:49.262 01:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.262 01:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:49.262 01:22:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.262 01:22:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.262 01:22:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.262 01:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:16:49.262 01:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:49.262 01:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:49.520 01:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:16:49.520 01:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:49.520 01:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:49.520 01:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:49.520 01:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:49.520 01:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.520 01:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.520 01:22:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.520 01:22:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.520 01:22:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.520 01:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.520 01:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.779 00:16:49.779 01:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:49.779 01:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.779 01:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:50.038 01:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.038 01:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.038 01:22:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.038 01:22:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.038 01:22:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.038 01:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:50.038 { 00:16:50.038 "cntlid": 45, 00:16:50.038 "qid": 0, 00:16:50.038 "state": "enabled", 00:16:50.038 "thread": "nvmf_tgt_poll_group_000", 00:16:50.038 "listen_address": { 00:16:50.038 "trtype": "TCP", 00:16:50.038 "adrfam": "IPv4", 00:16:50.038 "traddr": "10.0.0.2", 00:16:50.038 "trsvcid": "4420" 
00:16:50.038 }, 00:16:50.038 "peer_address": { 00:16:50.038 "trtype": "TCP", 00:16:50.038 "adrfam": "IPv4", 00:16:50.038 "traddr": "10.0.0.1", 00:16:50.038 "trsvcid": "45112" 00:16:50.038 }, 00:16:50.038 "auth": { 00:16:50.038 "state": "completed", 00:16:50.038 "digest": "sha256", 00:16:50.038 "dhgroup": "ffdhe8192" 00:16:50.038 } 00:16:50.038 } 00:16:50.038 ]' 00:16:50.038 01:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:50.038 01:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:50.038 01:22:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:50.038 01:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:50.038 01:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:50.296 01:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.296 01:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.296 01:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.296 01:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODNmZjI3ZjNkYmIwNDM0YTE3NDNhMDgyOTI5MTEwM2VjNWJlMmJiNGE2NzdmNmE2jIXSfw==: --dhchap-ctrl-secret DHHC-1:01:ZGJkNWM5YmFlMGYwNTI5OGY2YzU5YWNkNGM2NGM1OTPZAj1o: 00:16:50.863 01:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.863 01:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:50.863 01:22:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.863 01:22:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.863 01:22:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.863 01:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:50.863 01:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:50.863 01:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:51.121 01:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:16:51.121 01:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:51.121 01:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:51.121 01:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:51.121 01:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:51.121 01:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.121 01:22:16 
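# The nvme connect line above repeats the same authentication through the
# kernel initiator: nvme-cli takes the base64 DHHC-1 blobs directly, the host
# secret via --dhchap-secret and the controller (bidirectional) secret via
# --dhchap-ctrl-secret. A minimal sketch with placeholder secrets; the real
# blobs and the hostid/NQN values are printed in the log:
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid "$hostid" \
        --dhchap-secret "DHHC-1:02:<host secret>" \
        --dhchap-ctrl-secret "DHHC-1:01:<controller secret>"
nvme disconnect -n "$subnqn"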
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:51.121 01:22:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.121 01:22:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.121 01:22:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.121 01:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:51.121 01:22:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:51.688 00:16:51.688 01:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:51.688 01:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:51.688 01:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.688 01:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.688 01:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.688 01:22:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.688 01:22:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.688 01:22:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.688 01:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:51.688 { 00:16:51.688 "cntlid": 47, 00:16:51.688 "qid": 0, 00:16:51.688 "state": "enabled", 00:16:51.688 "thread": "nvmf_tgt_poll_group_000", 00:16:51.688 "listen_address": { 00:16:51.688 "trtype": "TCP", 00:16:51.688 "adrfam": "IPv4", 00:16:51.688 "traddr": "10.0.0.2", 00:16:51.688 "trsvcid": "4420" 00:16:51.688 }, 00:16:51.688 "peer_address": { 00:16:51.688 "trtype": "TCP", 00:16:51.688 "adrfam": "IPv4", 00:16:51.688 "traddr": "10.0.0.1", 00:16:51.688 "trsvcid": "45134" 00:16:51.688 }, 00:16:51.688 "auth": { 00:16:51.688 "state": "completed", 00:16:51.688 "digest": "sha256", 00:16:51.688 "dhgroup": "ffdhe8192" 00:16:51.688 } 00:16:51.688 } 00:16:51.688 ]' 00:16:51.688 01:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:51.688 01:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:51.688 01:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:51.945 01:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:51.945 01:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:51.945 01:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.945 01:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.945 
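# The progression through this log (sha256 walked across the ffdhe groups
# above, then sha384 restarting at null just below) is driven by three nested
# loops whose markers appear as auth.sh@91/@92/@93. A condensed sketch of
# that structure; the loop nesting is visible in the xtrace output, but the
# exact array contents are an assumption inferred from the order the log
# walks through (keys/ckeys are populated earlier in the script):
digests=("sha256" "sha384" "sha512")
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done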
01:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.945 01:22:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTVmYTM0MzExYjMzMTAzZDA3MTUyYmI3NDFhOTFkMTM1NTlhNzNmNjY1ZDJjYmEzYTUwZDRiNTQ4MWIxNjZiYVpMFa0=: 00:16:52.511 01:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.511 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.511 01:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:52.511 01:22:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.511 01:22:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.511 01:22:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.511 01:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:52.511 01:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:52.511 01:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:52.511 01:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:52.511 01:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:52.770 01:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:16:52.770 01:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:52.770 01:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:52.770 01:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:52.770 01:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:52.770 01:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.770 01:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.770 01:22:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.770 01:22:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.770 01:22:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.770 01:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.770 01:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.028 00:16:53.028 01:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:53.028 01:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:53.028 01:22:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.286 01:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.286 01:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.286 01:22:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.286 01:22:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.286 01:22:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.286 01:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:53.286 { 00:16:53.286 "cntlid": 49, 00:16:53.286 "qid": 0, 00:16:53.286 "state": "enabled", 00:16:53.286 "thread": "nvmf_tgt_poll_group_000", 00:16:53.286 "listen_address": { 00:16:53.286 "trtype": "TCP", 00:16:53.286 "adrfam": "IPv4", 00:16:53.286 "traddr": "10.0.0.2", 00:16:53.286 "trsvcid": "4420" 00:16:53.286 }, 00:16:53.286 "peer_address": { 00:16:53.286 "trtype": "TCP", 00:16:53.286 "adrfam": "IPv4", 00:16:53.286 "traddr": "10.0.0.1", 00:16:53.286 "trsvcid": "45154" 00:16:53.286 }, 00:16:53.286 "auth": { 00:16:53.286 "state": "completed", 00:16:53.286 "digest": "sha384", 00:16:53.286 "dhgroup": "null" 00:16:53.286 } 00:16:53.286 } 00:16:53.286 ]' 00:16:53.286 01:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:53.286 01:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:53.286 01:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:53.286 01:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:53.286 01:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:53.286 01:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.286 01:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.286 01:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.544 01:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWQwY2VkM2M4NTdmNDA4NzkwZDA0ZDU4NjAyZDg2YWNiNzk1MmUxYmRmZGE5YWZjtDgDPA==: --dhchap-ctrl-secret DHHC-1:03:NWZjM2M1MDM3MzQ2NGM5NzdiYzE4YWU1ZGUyODI4NGJmZDQ3YjhlODQ2OTVhNGVkNjdkZDdjZGY1YzI0ZDVmOfBcvD4=: 00:16:54.110 01:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.110 01:22:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:54.110 01:22:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.110 01:22:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.110 01:22:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.110 01:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:54.110 01:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:54.110 01:22:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:54.368 01:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:16:54.368 01:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:54.368 01:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:54.368 01:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:54.368 01:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:54.368 01:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.368 01:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.368 01:22:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.368 01:22:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.368 01:22:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.368 01:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.368 01:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.368 00:16:54.627 01:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:54.627 01:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:54.627 01:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.627 01:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.627 01:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.627 01:22:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.627 01:22:20 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:54.627 01:22:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.627 01:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:54.627 { 00:16:54.627 "cntlid": 51, 00:16:54.627 "qid": 0, 00:16:54.627 "state": "enabled", 00:16:54.627 "thread": "nvmf_tgt_poll_group_000", 00:16:54.627 "listen_address": { 00:16:54.627 "trtype": "TCP", 00:16:54.627 "adrfam": "IPv4", 00:16:54.627 "traddr": "10.0.0.2", 00:16:54.627 "trsvcid": "4420" 00:16:54.627 }, 00:16:54.627 "peer_address": { 00:16:54.627 "trtype": "TCP", 00:16:54.627 "adrfam": "IPv4", 00:16:54.627 "traddr": "10.0.0.1", 00:16:54.627 "trsvcid": "45176" 00:16:54.627 }, 00:16:54.627 "auth": { 00:16:54.627 "state": "completed", 00:16:54.627 "digest": "sha384", 00:16:54.627 "dhgroup": "null" 00:16:54.627 } 00:16:54.627 } 00:16:54.627 ]' 00:16:54.627 01:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:54.627 01:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:54.627 01:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:54.885 01:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:54.885 01:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:54.885 01:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.885 01:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.885 01:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.885 01:22:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTMyN2E5MjgwNWNiODEyY2M5MmE3ZWEwMzRmOWU4ZDVIPGp9: --dhchap-ctrl-secret DHHC-1:02:ZGQ4MDM2ZGJiODdlNWRlMmRkN2E2NDI2ODc5OWZhNWU3NzQyZTk3MDEwMWI2OTQzGm2RVA==: 00:16:55.452 01:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.452 01:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:55.452 01:22:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.452 01:22:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.452 01:22:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.452 01:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:55.452 01:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:55.452 01:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:55.710 01:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:16:55.710 01:22:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:55.710 01:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:55.710 01:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:55.710 01:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:55.710 01:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.710 01:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.710 01:22:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.710 01:22:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.710 01:22:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.710 01:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.710 01:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.968 00:16:55.968 01:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:55.968 01:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:55.968 01:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.225 01:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.225 01:22:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.226 01:22:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.226 01:22:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.226 01:22:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.226 01:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:56.226 { 00:16:56.226 "cntlid": 53, 00:16:56.226 "qid": 0, 00:16:56.226 "state": "enabled", 00:16:56.226 "thread": "nvmf_tgt_poll_group_000", 00:16:56.226 "listen_address": { 00:16:56.226 "trtype": "TCP", 00:16:56.226 "adrfam": "IPv4", 00:16:56.226 "traddr": "10.0.0.2", 00:16:56.226 "trsvcid": "4420" 00:16:56.226 }, 00:16:56.226 "peer_address": { 00:16:56.226 "trtype": "TCP", 00:16:56.226 "adrfam": "IPv4", 00:16:56.226 "traddr": "10.0.0.1", 00:16:56.226 "trsvcid": "45210" 00:16:56.226 }, 00:16:56.226 "auth": { 00:16:56.226 "state": "completed", 00:16:56.226 "digest": "sha384", 00:16:56.226 "dhgroup": "null" 00:16:56.226 } 00:16:56.226 } 00:16:56.226 ]' 00:16:56.226 01:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:56.226 01:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:16:56.226 01:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:56.226 01:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:56.226 01:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:56.226 01:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.226 01:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.226 01:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.484 01:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODNmZjI3ZjNkYmIwNDM0YTE3NDNhMDgyOTI5MTEwM2VjNWJlMmJiNGE2NzdmNmE2jIXSfw==: --dhchap-ctrl-secret DHHC-1:01:ZGJkNWM5YmFlMGYwNTI5OGY2YzU5YWNkNGM2NGM1OTPZAj1o: 00:16:57.049 01:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.049 01:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:57.049 01:22:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.049 01:22:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.049 01:22:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.049 01:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:57.049 01:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:57.049 01:22:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:57.049 01:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:16:57.049 01:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:57.049 01:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:57.049 01:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:57.049 01:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:57.049 01:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.049 01:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:57.049 01:22:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.049 01:22:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.049 01:22:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.049 01:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:57.049 01:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:57.306 00:16:57.306 01:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:57.306 01:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:57.306 01:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.563 01:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.563 01:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.564 01:22:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.564 01:22:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.564 01:22:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.564 01:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:57.564 { 00:16:57.564 "cntlid": 55, 00:16:57.564 "qid": 0, 00:16:57.564 "state": "enabled", 00:16:57.564 "thread": "nvmf_tgt_poll_group_000", 00:16:57.564 "listen_address": { 00:16:57.564 "trtype": "TCP", 00:16:57.564 "adrfam": "IPv4", 00:16:57.564 "traddr": "10.0.0.2", 00:16:57.564 "trsvcid": "4420" 00:16:57.564 }, 00:16:57.564 "peer_address": { 00:16:57.564 "trtype": "TCP", 00:16:57.564 "adrfam": "IPv4", 00:16:57.564 "traddr": "10.0.0.1", 00:16:57.564 "trsvcid": "45240" 00:16:57.564 }, 00:16:57.564 "auth": { 00:16:57.564 "state": "completed", 00:16:57.564 "digest": "sha384", 00:16:57.564 "dhgroup": "null" 00:16:57.564 } 00:16:57.564 } 00:16:57.564 ]' 00:16:57.564 01:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:57.564 01:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:57.564 01:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:57.564 01:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:57.564 01:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:57.821 01:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.821 01:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.821 01:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.821 01:22:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTVmYTM0MzExYjMzMTAzZDA3MTUyYmI3NDFhOTFkMTM1NTlhNzNmNjY1ZDJjYmEzYTUwZDRiNTQ4MWIxNjZiYVpMFa0=: 00:16:58.384 01:22:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.384 01:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:58.384 01:22:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.384 01:22:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.384 01:22:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.384 01:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:58.384 01:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:58.384 01:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:58.384 01:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:58.642 01:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:16:58.642 01:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:58.642 01:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:58.642 01:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:58.642 01:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:58.642 01:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.642 01:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.642 01:22:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.642 01:22:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.642 01:22:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.642 01:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.643 01:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.899 00:16:58.899 01:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:58.899 01:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:58.899 01:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.900 01:22:24 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.900 01:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.900 01:22:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.900 01:22:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.900 01:22:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.900 01:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:58.900 { 00:16:58.900 "cntlid": 57, 00:16:58.900 "qid": 0, 00:16:58.900 "state": "enabled", 00:16:58.900 "thread": "nvmf_tgt_poll_group_000", 00:16:58.900 "listen_address": { 00:16:58.900 "trtype": "TCP", 00:16:58.900 "adrfam": "IPv4", 00:16:58.900 "traddr": "10.0.0.2", 00:16:58.900 "trsvcid": "4420" 00:16:58.900 }, 00:16:58.900 "peer_address": { 00:16:58.900 "trtype": "TCP", 00:16:58.900 "adrfam": "IPv4", 00:16:58.900 "traddr": "10.0.0.1", 00:16:58.900 "trsvcid": "37582" 00:16:58.900 }, 00:16:58.900 "auth": { 00:16:58.900 "state": "completed", 00:16:58.900 "digest": "sha384", 00:16:58.900 "dhgroup": "ffdhe2048" 00:16:58.900 } 00:16:58.900 } 00:16:58.900 ]' 00:16:59.170 01:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:59.170 01:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:59.170 01:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:59.170 01:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:59.170 01:22:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:59.170 01:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.170 01:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.170 01:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.426 01:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWQwY2VkM2M4NTdmNDA4NzkwZDA0ZDU4NjAyZDg2YWNiNzk1MmUxYmRmZGE5YWZjtDgDPA==: --dhchap-ctrl-secret DHHC-1:03:NWZjM2M1MDM3MzQ2NGM5NzdiYzE4YWU1ZGUyODI4NGJmZDQ3YjhlODQ2OTVhNGVkNjdkZDdjZGY1YzI0ZDVmOfBcvD4=: 00:16:59.991 01:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.991 01:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:59.991 01:22:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.991 01:22:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.991 01:22:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.991 01:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:59.991 01:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:59.991 01:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:59.991 01:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:16:59.991 01:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:59.991 01:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:59.991 01:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:59.991 01:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:59.991 01:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.991 01:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.991 01:22:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.991 01:22:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.991 01:22:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.991 01:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.991 01:22:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.249 00:17:00.249 01:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:00.249 01:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:00.249 01:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.507 01:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.507 01:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.507 01:22:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.507 01:22:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.507 01:22:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.507 01:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:00.507 { 00:17:00.507 "cntlid": 59, 00:17:00.507 "qid": 0, 00:17:00.507 "state": "enabled", 00:17:00.507 "thread": "nvmf_tgt_poll_group_000", 00:17:00.507 "listen_address": { 00:17:00.507 "trtype": "TCP", 00:17:00.507 "adrfam": "IPv4", 00:17:00.507 "traddr": "10.0.0.2", 00:17:00.507 "trsvcid": "4420" 00:17:00.507 }, 00:17:00.507 "peer_address": { 00:17:00.507 "trtype": "TCP", 00:17:00.507 "adrfam": "IPv4", 00:17:00.507 
"traddr": "10.0.0.1", 00:17:00.507 "trsvcid": "37606" 00:17:00.507 }, 00:17:00.507 "auth": { 00:17:00.507 "state": "completed", 00:17:00.507 "digest": "sha384", 00:17:00.507 "dhgroup": "ffdhe2048" 00:17:00.507 } 00:17:00.507 } 00:17:00.507 ]' 00:17:00.507 01:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:00.507 01:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:00.507 01:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:00.507 01:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:00.507 01:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:00.507 01:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.507 01:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.507 01:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.764 01:22:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTMyN2E5MjgwNWNiODEyY2M5MmE3ZWEwMzRmOWU4ZDVIPGp9: --dhchap-ctrl-secret DHHC-1:02:ZGQ4MDM2ZGJiODdlNWRlMmRkN2E2NDI2ODc5OWZhNWU3NzQyZTk3MDEwMWI2OTQzGm2RVA==: 00:17:01.330 01:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.330 01:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:01.330 01:22:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.330 01:22:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.330 01:22:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.330 01:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:01.330 01:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:01.330 01:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:01.588 01:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:17:01.588 01:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:01.588 01:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:01.588 01:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:01.588 01:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:01.588 01:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.588 01:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.588 01:22:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.588 01:22:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.588 01:22:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.588 01:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.588 01:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.846 00:17:01.846 01:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:01.846 01:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:01.846 01:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.846 01:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.846 01:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.846 01:22:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.846 01:22:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.846 01:22:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.846 01:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:01.846 { 00:17:01.846 "cntlid": 61, 00:17:01.846 "qid": 0, 00:17:01.846 "state": "enabled", 00:17:01.846 "thread": "nvmf_tgt_poll_group_000", 00:17:01.846 "listen_address": { 00:17:01.846 "trtype": "TCP", 00:17:01.846 "adrfam": "IPv4", 00:17:01.846 "traddr": "10.0.0.2", 00:17:01.846 "trsvcid": "4420" 00:17:01.846 }, 00:17:01.846 "peer_address": { 00:17:01.846 "trtype": "TCP", 00:17:01.846 "adrfam": "IPv4", 00:17:01.846 "traddr": "10.0.0.1", 00:17:01.846 "trsvcid": "37644" 00:17:01.846 }, 00:17:01.846 "auth": { 00:17:01.846 "state": "completed", 00:17:01.846 "digest": "sha384", 00:17:01.846 "dhgroup": "ffdhe2048" 00:17:01.846 } 00:17:01.846 } 00:17:01.846 ]' 00:17:01.846 01:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:02.104 01:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:02.104 01:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:02.104 01:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:02.104 01:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:02.104 01:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.104 01:22:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.104 01:22:27 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.362 01:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODNmZjI3ZjNkYmIwNDM0YTE3NDNhMDgyOTI5MTEwM2VjNWJlMmJiNGE2NzdmNmE2jIXSfw==: --dhchap-ctrl-secret DHHC-1:01:ZGJkNWM5YmFlMGYwNTI5OGY2YzU5YWNkNGM2NGM1OTPZAj1o: 00:17:02.928 01:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.928 01:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:02.928 01:22:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.928 01:22:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.928 01:22:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.928 01:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:02.928 01:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:02.928 01:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:02.928 01:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:17:02.928 01:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:02.928 01:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:02.928 01:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:02.928 01:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:02.928 01:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.928 01:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:02.928 01:22:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.928 01:22:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.928 01:22:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.928 01:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:02.928 01:22:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:03.187 00:17:03.187 01:22:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:03.187 01:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:03.187 01:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.446 01:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.446 01:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.446 01:22:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.446 01:22:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.446 01:22:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.446 01:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:03.446 { 00:17:03.446 "cntlid": 63, 00:17:03.446 "qid": 0, 00:17:03.446 "state": "enabled", 00:17:03.446 "thread": "nvmf_tgt_poll_group_000", 00:17:03.446 "listen_address": { 00:17:03.446 "trtype": "TCP", 00:17:03.446 "adrfam": "IPv4", 00:17:03.446 "traddr": "10.0.0.2", 00:17:03.446 "trsvcid": "4420" 00:17:03.446 }, 00:17:03.446 "peer_address": { 00:17:03.446 "trtype": "TCP", 00:17:03.446 "adrfam": "IPv4", 00:17:03.446 "traddr": "10.0.0.1", 00:17:03.446 "trsvcid": "37672" 00:17:03.446 }, 00:17:03.446 "auth": { 00:17:03.446 "state": "completed", 00:17:03.446 "digest": "sha384", 00:17:03.446 "dhgroup": "ffdhe2048" 00:17:03.446 } 00:17:03.446 } 00:17:03.446 ]' 00:17:03.446 01:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:03.446 01:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:03.446 01:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:03.446 01:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:03.446 01:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:03.446 01:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.446 01:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.446 01:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.704 01:22:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTVmYTM0MzExYjMzMTAzZDA3MTUyYmI3NDFhOTFkMTM1NTlhNzNmNjY1ZDJjYmEzYTUwZDRiNTQ4MWIxNjZiYVpMFa0=: 00:17:04.296 01:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.296 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.296 01:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:04.296 01:22:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.296 01:22:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:04.296 01:22:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.296 01:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:04.296 01:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:04.296 01:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:04.296 01:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:04.554 01:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:17:04.554 01:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:04.554 01:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:04.554 01:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:04.554 01:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:04.554 01:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.554 01:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.554 01:22:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.554 01:22:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.554 01:22:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.554 01:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.554 01:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.812 00:17:04.812 01:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:04.812 01:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:04.812 01:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.812 01:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.812 01:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.812 01:22:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.812 01:22:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.812 01:22:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.812 01:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:04.812 { 
00:17:04.812 "cntlid": 65, 00:17:04.812 "qid": 0, 00:17:04.812 "state": "enabled", 00:17:04.812 "thread": "nvmf_tgt_poll_group_000", 00:17:04.812 "listen_address": { 00:17:04.812 "trtype": "TCP", 00:17:04.812 "adrfam": "IPv4", 00:17:04.812 "traddr": "10.0.0.2", 00:17:04.812 "trsvcid": "4420" 00:17:04.812 }, 00:17:04.812 "peer_address": { 00:17:04.812 "trtype": "TCP", 00:17:04.812 "adrfam": "IPv4", 00:17:04.812 "traddr": "10.0.0.1", 00:17:04.812 "trsvcid": "37712" 00:17:04.812 }, 00:17:04.812 "auth": { 00:17:04.812 "state": "completed", 00:17:04.812 "digest": "sha384", 00:17:04.812 "dhgroup": "ffdhe3072" 00:17:04.812 } 00:17:04.812 } 00:17:04.812 ]' 00:17:04.812 01:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:04.812 01:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:04.812 01:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:05.069 01:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:05.069 01:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:05.069 01:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.069 01:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.069 01:22:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.069 01:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWQwY2VkM2M4NTdmNDA4NzkwZDA0ZDU4NjAyZDg2YWNiNzk1MmUxYmRmZGE5YWZjtDgDPA==: --dhchap-ctrl-secret DHHC-1:03:NWZjM2M1MDM3MzQ2NGM5NzdiYzE4YWU1ZGUyODI4NGJmZDQ3YjhlODQ2OTVhNGVkNjdkZDdjZGY1YzI0ZDVmOfBcvD4=: 00:17:05.634 01:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.634 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.634 01:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:05.634 01:22:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.634 01:22:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.634 01:22:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.634 01:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:05.634 01:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:05.634 01:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:05.892 01:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:17:05.892 01:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:05.892 01:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:17:05.892 01:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:05.892 01:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:05.892 01:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.892 01:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.892 01:22:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.892 01:22:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.892 01:22:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.892 01:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.892 01:22:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.171 00:17:06.171 01:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:06.171 01:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:06.171 01:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.427 01:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.427 01:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.427 01:22:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.427 01:22:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.427 01:22:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.427 01:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:06.427 { 00:17:06.427 "cntlid": 67, 00:17:06.427 "qid": 0, 00:17:06.427 "state": "enabled", 00:17:06.427 "thread": "nvmf_tgt_poll_group_000", 00:17:06.427 "listen_address": { 00:17:06.427 "trtype": "TCP", 00:17:06.427 "adrfam": "IPv4", 00:17:06.427 "traddr": "10.0.0.2", 00:17:06.427 "trsvcid": "4420" 00:17:06.427 }, 00:17:06.427 "peer_address": { 00:17:06.427 "trtype": "TCP", 00:17:06.427 "adrfam": "IPv4", 00:17:06.427 "traddr": "10.0.0.1", 00:17:06.427 "trsvcid": "37734" 00:17:06.427 }, 00:17:06.427 "auth": { 00:17:06.427 "state": "completed", 00:17:06.427 "digest": "sha384", 00:17:06.427 "dhgroup": "ffdhe3072" 00:17:06.427 } 00:17:06.427 } 00:17:06.427 ]' 00:17:06.427 01:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:06.427 01:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:06.427 01:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:06.427 01:22:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:06.427 01:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:06.427 01:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.427 01:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.427 01:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.684 01:22:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTMyN2E5MjgwNWNiODEyY2M5MmE3ZWEwMzRmOWU4ZDVIPGp9: --dhchap-ctrl-secret DHHC-1:02:ZGQ4MDM2ZGJiODdlNWRlMmRkN2E2NDI2ODc5OWZhNWU3NzQyZTk3MDEwMWI2OTQzGm2RVA==: 00:17:07.246 01:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.246 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.246 01:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:07.246 01:22:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.246 01:22:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.246 01:22:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.246 01:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:07.246 01:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:07.246 01:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:07.246 01:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:17:07.247 01:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:07.247 01:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:07.247 01:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:07.247 01:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:07.247 01:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.247 01:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.247 01:22:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.247 01:22:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.247 01:22:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.247 01:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.247 01:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.504 00:17:07.504 01:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:07.504 01:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:07.504 01:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.759 01:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.759 01:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.759 01:22:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.759 01:22:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.759 01:22:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.759 01:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:07.759 { 00:17:07.759 "cntlid": 69, 00:17:07.759 "qid": 0, 00:17:07.759 "state": "enabled", 00:17:07.759 "thread": "nvmf_tgt_poll_group_000", 00:17:07.759 "listen_address": { 00:17:07.759 "trtype": "TCP", 00:17:07.759 "adrfam": "IPv4", 00:17:07.759 "traddr": "10.0.0.2", 00:17:07.759 "trsvcid": "4420" 00:17:07.759 }, 00:17:07.759 "peer_address": { 00:17:07.759 "trtype": "TCP", 00:17:07.759 "adrfam": "IPv4", 00:17:07.759 "traddr": "10.0.0.1", 00:17:07.759 "trsvcid": "37756" 00:17:07.759 }, 00:17:07.759 "auth": { 00:17:07.759 "state": "completed", 00:17:07.759 "digest": "sha384", 00:17:07.759 "dhgroup": "ffdhe3072" 00:17:07.759 } 00:17:07.759 } 00:17:07.759 ]' 00:17:07.759 01:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:07.759 01:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:07.759 01:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:07.759 01:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:07.759 01:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:08.015 01:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.015 01:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.015 01:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.015 01:22:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODNmZjI3ZjNkYmIwNDM0YTE3NDNhMDgyOTI5MTEwM2VjNWJlMmJiNGE2NzdmNmE2jIXSfw==: --dhchap-ctrl-secret 
DHHC-1:01:ZGJkNWM5YmFlMGYwNTI5OGY2YzU5YWNkNGM2NGM1OTPZAj1o: 00:17:08.578 01:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.578 01:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:08.578 01:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.578 01:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.578 01:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.578 01:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:08.578 01:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:08.578 01:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:08.836 01:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:17:08.836 01:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:08.836 01:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:08.836 01:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:08.836 01:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:08.836 01:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.836 01:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:08.836 01:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.836 01:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.836 01:22:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.836 01:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:08.836 01:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:09.093 00:17:09.093 01:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:09.093 01:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:09.093 01:22:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.351 01:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.351 01:22:35 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.351 01:22:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.351 01:22:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.351 01:22:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.351 01:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:09.351 { 00:17:09.351 "cntlid": 71, 00:17:09.351 "qid": 0, 00:17:09.351 "state": "enabled", 00:17:09.351 "thread": "nvmf_tgt_poll_group_000", 00:17:09.351 "listen_address": { 00:17:09.351 "trtype": "TCP", 00:17:09.351 "adrfam": "IPv4", 00:17:09.351 "traddr": "10.0.0.2", 00:17:09.351 "trsvcid": "4420" 00:17:09.351 }, 00:17:09.351 "peer_address": { 00:17:09.351 "trtype": "TCP", 00:17:09.351 "adrfam": "IPv4", 00:17:09.351 "traddr": "10.0.0.1", 00:17:09.351 "trsvcid": "44432" 00:17:09.351 }, 00:17:09.351 "auth": { 00:17:09.351 "state": "completed", 00:17:09.351 "digest": "sha384", 00:17:09.351 "dhgroup": "ffdhe3072" 00:17:09.351 } 00:17:09.351 } 00:17:09.351 ]' 00:17:09.351 01:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:09.351 01:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:09.351 01:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:09.351 01:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:09.351 01:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:09.351 01:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.351 01:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.351 01:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.610 01:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTVmYTM0MzExYjMzMTAzZDA3MTUyYmI3NDFhOTFkMTM1NTlhNzNmNjY1ZDJjYmEzYTUwZDRiNTQ4MWIxNjZiYVpMFa0=: 00:17:10.174 01:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.174 01:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:10.174 01:22:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.174 01:22:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.174 01:22:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.174 01:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:10.175 01:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:10.175 01:22:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:10.175 01:22:35 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:10.175 01:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:17:10.175 01:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:10.175 01:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:10.175 01:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:10.175 01:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:10.175 01:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.175 01:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.175 01:22:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.175 01:22:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.175 01:22:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.175 01:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.175 01:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.432 00:17:10.433 01:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:10.433 01:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.433 01:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:10.690 01:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.690 01:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.690 01:22:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.690 01:22:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.690 01:22:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.690 01:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:10.690 { 00:17:10.690 "cntlid": 73, 00:17:10.690 "qid": 0, 00:17:10.690 "state": "enabled", 00:17:10.690 "thread": "nvmf_tgt_poll_group_000", 00:17:10.690 "listen_address": { 00:17:10.690 "trtype": "TCP", 00:17:10.690 "adrfam": "IPv4", 00:17:10.690 "traddr": "10.0.0.2", 00:17:10.690 "trsvcid": "4420" 00:17:10.690 }, 00:17:10.690 "peer_address": { 00:17:10.690 "trtype": "TCP", 00:17:10.690 "adrfam": "IPv4", 00:17:10.690 "traddr": "10.0.0.1", 00:17:10.690 "trsvcid": "44456" 00:17:10.690 }, 00:17:10.690 "auth": { 00:17:10.690 
"state": "completed", 00:17:10.690 "digest": "sha384", 00:17:10.690 "dhgroup": "ffdhe4096" 00:17:10.690 } 00:17:10.690 } 00:17:10.690 ]' 00:17:10.690 01:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:10.690 01:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:10.690 01:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:10.690 01:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:10.690 01:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:10.947 01:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.947 01:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.947 01:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.947 01:22:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWQwY2VkM2M4NTdmNDA4NzkwZDA0ZDU4NjAyZDg2YWNiNzk1MmUxYmRmZGE5YWZjtDgDPA==: --dhchap-ctrl-secret DHHC-1:03:NWZjM2M1MDM3MzQ2NGM5NzdiYzE4YWU1ZGUyODI4NGJmZDQ3YjhlODQ2OTVhNGVkNjdkZDdjZGY1YzI0ZDVmOfBcvD4=: 00:17:11.513 01:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.513 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.513 01:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:11.513 01:22:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.513 01:22:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.513 01:22:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.513 01:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:11.513 01:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:11.513 01:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:11.770 01:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:17:11.770 01:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:11.770 01:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:11.770 01:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:11.770 01:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:11.770 01:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.770 01:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.770 01:22:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.770 01:22:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.770 01:22:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.770 01:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.770 01:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.028 00:17:12.028 01:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:12.028 01:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:12.028 01:22:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.285 01:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.285 01:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.285 01:22:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.285 01:22:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.285 01:22:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.285 01:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:12.285 { 00:17:12.285 "cntlid": 75, 00:17:12.285 "qid": 0, 00:17:12.285 "state": "enabled", 00:17:12.285 "thread": "nvmf_tgt_poll_group_000", 00:17:12.286 "listen_address": { 00:17:12.286 "trtype": "TCP", 00:17:12.286 "adrfam": "IPv4", 00:17:12.286 "traddr": "10.0.0.2", 00:17:12.286 "trsvcid": "4420" 00:17:12.286 }, 00:17:12.286 "peer_address": { 00:17:12.286 "trtype": "TCP", 00:17:12.286 "adrfam": "IPv4", 00:17:12.286 "traddr": "10.0.0.1", 00:17:12.286 "trsvcid": "44480" 00:17:12.286 }, 00:17:12.286 "auth": { 00:17:12.286 "state": "completed", 00:17:12.286 "digest": "sha384", 00:17:12.286 "dhgroup": "ffdhe4096" 00:17:12.286 } 00:17:12.286 } 00:17:12.286 ]' 00:17:12.286 01:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:12.286 01:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:12.286 01:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:12.286 01:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:12.286 01:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:12.286 01:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.286 01:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.286 01:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.543 01:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTMyN2E5MjgwNWNiODEyY2M5MmE3ZWEwMzRmOWU4ZDVIPGp9: --dhchap-ctrl-secret DHHC-1:02:ZGQ4MDM2ZGJiODdlNWRlMmRkN2E2NDI2ODc5OWZhNWU3NzQyZTk3MDEwMWI2OTQzGm2RVA==: 00:17:13.107 01:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.107 01:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:13.107 01:22:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.107 01:22:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.107 01:22:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.107 01:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:13.107 01:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:13.107 01:22:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:13.107 01:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:17:13.107 01:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:13.107 01:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:13.107 01:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:13.107 01:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:13.107 01:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.107 01:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.107 01:22:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.107 01:22:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.365 01:22:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.365 01:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.365 01:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:17:13.365 00:17:13.622 01:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:13.622 01:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:13.622 01:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.622 01:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.622 01:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.623 01:22:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.623 01:22:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.623 01:22:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.623 01:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:13.623 { 00:17:13.623 "cntlid": 77, 00:17:13.623 "qid": 0, 00:17:13.623 "state": "enabled", 00:17:13.623 "thread": "nvmf_tgt_poll_group_000", 00:17:13.623 "listen_address": { 00:17:13.623 "trtype": "TCP", 00:17:13.623 "adrfam": "IPv4", 00:17:13.623 "traddr": "10.0.0.2", 00:17:13.623 "trsvcid": "4420" 00:17:13.623 }, 00:17:13.623 "peer_address": { 00:17:13.623 "trtype": "TCP", 00:17:13.623 "adrfam": "IPv4", 00:17:13.623 "traddr": "10.0.0.1", 00:17:13.623 "trsvcid": "44512" 00:17:13.623 }, 00:17:13.623 "auth": { 00:17:13.623 "state": "completed", 00:17:13.623 "digest": "sha384", 00:17:13.623 "dhgroup": "ffdhe4096" 00:17:13.623 } 00:17:13.623 } 00:17:13.623 ]' 00:17:13.623 01:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:13.623 01:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:13.623 01:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:13.881 01:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:13.881 01:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:13.881 01:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.881 01:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.881 01:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.881 01:22:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODNmZjI3ZjNkYmIwNDM0YTE3NDNhMDgyOTI5MTEwM2VjNWJlMmJiNGE2NzdmNmE2jIXSfw==: --dhchap-ctrl-secret DHHC-1:01:ZGJkNWM5YmFlMGYwNTI5OGY2YzU5YWNkNGM2NGM1OTPZAj1o: 00:17:14.445 01:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.445 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.445 01:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:14.445 01:22:40 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.445 01:22:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.445 01:22:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.445 01:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:14.445 01:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:14.445 01:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:14.701 01:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:17:14.701 01:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:14.701 01:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:14.701 01:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:14.701 01:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:14.701 01:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.701 01:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:14.701 01:22:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.702 01:22:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.702 01:22:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.702 01:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:14.702 01:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:15.029 00:17:15.029 01:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:15.029 01:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:15.029 01:22:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.292 01:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.292 01:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.292 01:22:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.292 01:22:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.292 01:22:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.292 01:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:15.292 { 00:17:15.292 "cntlid": 79, 00:17:15.292 "qid": 
0, 00:17:15.292 "state": "enabled", 00:17:15.292 "thread": "nvmf_tgt_poll_group_000", 00:17:15.292 "listen_address": { 00:17:15.292 "trtype": "TCP", 00:17:15.292 "adrfam": "IPv4", 00:17:15.292 "traddr": "10.0.0.2", 00:17:15.292 "trsvcid": "4420" 00:17:15.292 }, 00:17:15.292 "peer_address": { 00:17:15.292 "trtype": "TCP", 00:17:15.292 "adrfam": "IPv4", 00:17:15.292 "traddr": "10.0.0.1", 00:17:15.292 "trsvcid": "44536" 00:17:15.292 }, 00:17:15.292 "auth": { 00:17:15.292 "state": "completed", 00:17:15.292 "digest": "sha384", 00:17:15.292 "dhgroup": "ffdhe4096" 00:17:15.292 } 00:17:15.292 } 00:17:15.292 ]' 00:17:15.292 01:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:15.292 01:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:15.292 01:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:15.292 01:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:15.292 01:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:15.292 01:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.292 01:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.292 01:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.550 01:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTVmYTM0MzExYjMzMTAzZDA3MTUyYmI3NDFhOTFkMTM1NTlhNzNmNjY1ZDJjYmEzYTUwZDRiNTQ4MWIxNjZiYVpMFa0=: 00:17:16.116 01:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.116 01:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:16.116 01:22:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.116 01:22:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.116 01:22:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.116 01:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:16.116 01:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:16.116 01:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:16.116 01:22:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:16.116 01:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:17:16.116 01:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:16.116 01:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:16.116 01:22:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:16.116 01:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:16.116 01:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.116 01:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.116 01:22:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.116 01:22:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.116 01:22:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.116 01:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.116 01:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.682 00:17:16.682 01:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:16.682 01:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:16.682 01:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.682 01:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.682 01:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.682 01:22:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.682 01:22:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.682 01:22:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.682 01:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:16.682 { 00:17:16.682 "cntlid": 81, 00:17:16.682 "qid": 0, 00:17:16.682 "state": "enabled", 00:17:16.682 "thread": "nvmf_tgt_poll_group_000", 00:17:16.682 "listen_address": { 00:17:16.682 "trtype": "TCP", 00:17:16.682 "adrfam": "IPv4", 00:17:16.682 "traddr": "10.0.0.2", 00:17:16.682 "trsvcid": "4420" 00:17:16.682 }, 00:17:16.682 "peer_address": { 00:17:16.682 "trtype": "TCP", 00:17:16.682 "adrfam": "IPv4", 00:17:16.682 "traddr": "10.0.0.1", 00:17:16.682 "trsvcid": "44580" 00:17:16.682 }, 00:17:16.682 "auth": { 00:17:16.682 "state": "completed", 00:17:16.682 "digest": "sha384", 00:17:16.682 "dhgroup": "ffdhe6144" 00:17:16.682 } 00:17:16.682 } 00:17:16.682 ]' 00:17:16.682 01:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:16.682 01:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:16.682 01:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:16.682 01:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:16.682 01:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:16.940 01:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.940 01:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.940 01:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.940 01:22:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWQwY2VkM2M4NTdmNDA4NzkwZDA0ZDU4NjAyZDg2YWNiNzk1MmUxYmRmZGE5YWZjtDgDPA==: --dhchap-ctrl-secret DHHC-1:03:NWZjM2M1MDM3MzQ2NGM5NzdiYzE4YWU1ZGUyODI4NGJmZDQ3YjhlODQ2OTVhNGVkNjdkZDdjZGY1YzI0ZDVmOfBcvD4=: 00:17:17.506 01:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.506 01:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:17.506 01:22:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.506 01:22:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.506 01:22:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.506 01:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:17.506 01:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:17.506 01:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:17.765 01:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:17:17.765 01:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:17.765 01:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:17.765 01:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:17.765 01:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:17.765 01:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.765 01:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.765 01:22:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.765 01:22:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.765 01:22:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.765 01:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.765 01:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.023 00:17:18.023 01:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:18.023 01:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:18.023 01:22:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.280 01:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.280 01:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.280 01:22:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.280 01:22:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.280 01:22:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.280 01:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:18.280 { 00:17:18.280 "cntlid": 83, 00:17:18.280 "qid": 0, 00:17:18.280 "state": "enabled", 00:17:18.280 "thread": "nvmf_tgt_poll_group_000", 00:17:18.280 "listen_address": { 00:17:18.280 "trtype": "TCP", 00:17:18.280 "adrfam": "IPv4", 00:17:18.280 "traddr": "10.0.0.2", 00:17:18.280 "trsvcid": "4420" 00:17:18.280 }, 00:17:18.280 "peer_address": { 00:17:18.280 "trtype": "TCP", 00:17:18.280 "adrfam": "IPv4", 00:17:18.280 "traddr": "10.0.0.1", 00:17:18.280 "trsvcid": "59458" 00:17:18.280 }, 00:17:18.280 "auth": { 00:17:18.280 "state": "completed", 00:17:18.280 "digest": "sha384", 00:17:18.280 "dhgroup": "ffdhe6144" 00:17:18.280 } 00:17:18.280 } 00:17:18.280 ]' 00:17:18.280 01:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:18.280 01:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:18.280 01:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:18.280 01:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:18.280 01:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:18.280 01:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.280 01:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.280 01:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.538 01:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTMyN2E5MjgwNWNiODEyY2M5MmE3ZWEwMzRmOWU4ZDVIPGp9: --dhchap-ctrl-secret 
DHHC-1:02:ZGQ4MDM2ZGJiODdlNWRlMmRkN2E2NDI2ODc5OWZhNWU3NzQyZTk3MDEwMWI2OTQzGm2RVA==: 00:17:19.105 01:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.105 01:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:19.105 01:22:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.105 01:22:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.105 01:22:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.105 01:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:19.105 01:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:19.105 01:22:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:19.363 01:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:17:19.363 01:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:19.363 01:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:19.363 01:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:19.363 01:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:19.363 01:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.363 01:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.363 01:22:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.363 01:22:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.363 01:22:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.363 01:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.363 01:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.621 00:17:19.621 01:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:19.621 01:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.621 01:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:19.879 01:22:45 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.879 01:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.879 01:22:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.879 01:22:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.879 01:22:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.879 01:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:19.879 { 00:17:19.879 "cntlid": 85, 00:17:19.879 "qid": 0, 00:17:19.879 "state": "enabled", 00:17:19.879 "thread": "nvmf_tgt_poll_group_000", 00:17:19.879 "listen_address": { 00:17:19.879 "trtype": "TCP", 00:17:19.879 "adrfam": "IPv4", 00:17:19.879 "traddr": "10.0.0.2", 00:17:19.879 "trsvcid": "4420" 00:17:19.879 }, 00:17:19.879 "peer_address": { 00:17:19.879 "trtype": "TCP", 00:17:19.879 "adrfam": "IPv4", 00:17:19.879 "traddr": "10.0.0.1", 00:17:19.879 "trsvcid": "59490" 00:17:19.879 }, 00:17:19.879 "auth": { 00:17:19.879 "state": "completed", 00:17:19.879 "digest": "sha384", 00:17:19.879 "dhgroup": "ffdhe6144" 00:17:19.879 } 00:17:19.879 } 00:17:19.879 ]' 00:17:19.879 01:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:19.879 01:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:19.879 01:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:19.879 01:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:19.879 01:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:19.879 01:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.879 01:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.879 01:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.137 01:22:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODNmZjI3ZjNkYmIwNDM0YTE3NDNhMDgyOTI5MTEwM2VjNWJlMmJiNGE2NzdmNmE2jIXSfw==: --dhchap-ctrl-secret DHHC-1:01:ZGJkNWM5YmFlMGYwNTI5OGY2YzU5YWNkNGM2NGM1OTPZAj1o: 00:17:20.703 01:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.703 01:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:20.703 01:22:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.703 01:22:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.703 01:22:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.703 01:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:20.703 01:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
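(For orientation: one setup pass of the connect_authenticate loop traced here reduces to the bash sketch below. hostrpc and rpc_cmd are the suite's wrappers around scripts/rpc.py -- the trace shows hostrpc expanding to rpc.py -s /var/tmp/host.sock against the host application, while rpc_cmd drives the nvmf target; digest, dhgroup and key id vary per iteration.)

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
  # reconfigure the host stack for this iteration's digest/dhgroup
  hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
  # register the host on the target with the key under test (a --dhchap-ctrlr-key
  # ckeyN is appended when this iteration defines a controller key)
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key3
  # attach a controller, authenticating with the same key
  hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
          -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3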
00:17:20.704 01:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:20.962 01:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:17:20.962 01:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:20.962 01:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:20.962 01:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:20.962 01:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:20.962 01:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.962 01:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:20.962 01:22:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.962 01:22:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.962 01:22:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.962 01:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:20.962 01:22:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:21.219 00:17:21.219 01:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:21.219 01:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.219 01:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:21.477 01:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.477 01:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.477 01:22:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.477 01:22:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.477 01:22:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.477 01:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:21.477 { 00:17:21.477 "cntlid": 87, 00:17:21.477 "qid": 0, 00:17:21.477 "state": "enabled", 00:17:21.477 "thread": "nvmf_tgt_poll_group_000", 00:17:21.477 "listen_address": { 00:17:21.477 "trtype": "TCP", 00:17:21.477 "adrfam": "IPv4", 00:17:21.477 "traddr": "10.0.0.2", 00:17:21.477 "trsvcid": "4420" 00:17:21.477 }, 00:17:21.477 "peer_address": { 00:17:21.477 "trtype": "TCP", 00:17:21.477 "adrfam": "IPv4", 00:17:21.477 "traddr": "10.0.0.1", 00:17:21.477 "trsvcid": "59524" 00:17:21.477 }, 00:17:21.477 "auth": { 00:17:21.477 "state": "completed", 
00:17:21.477 "digest": "sha384", 00:17:21.477 "dhgroup": "ffdhe6144" 00:17:21.477 } 00:17:21.477 } 00:17:21.477 ]' 00:17:21.477 01:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:21.477 01:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:21.477 01:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:21.477 01:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:21.477 01:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:21.477 01:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.477 01:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.477 01:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.735 01:22:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTVmYTM0MzExYjMzMTAzZDA3MTUyYmI3NDFhOTFkMTM1NTlhNzNmNjY1ZDJjYmEzYTUwZDRiNTQ4MWIxNjZiYVpMFa0=: 00:17:22.301 01:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.301 01:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:22.301 01:22:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.301 01:22:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.301 01:22:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.301 01:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:22.301 01:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:22.302 01:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:22.302 01:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:22.302 01:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:17:22.302 01:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:22.302 01:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:22.302 01:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:22.302 01:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:22.302 01:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.302 01:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:17:22.302 01:22:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.302 01:22:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.302 01:22:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.302 01:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.302 01:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.868 00:17:22.868 01:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:22.868 01:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:22.868 01:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.126 01:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.126 01:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.126 01:22:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.126 01:22:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.126 01:22:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.126 01:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:23.126 { 00:17:23.126 "cntlid": 89, 00:17:23.126 "qid": 0, 00:17:23.126 "state": "enabled", 00:17:23.126 "thread": "nvmf_tgt_poll_group_000", 00:17:23.126 "listen_address": { 00:17:23.126 "trtype": "TCP", 00:17:23.126 "adrfam": "IPv4", 00:17:23.126 "traddr": "10.0.0.2", 00:17:23.126 "trsvcid": "4420" 00:17:23.126 }, 00:17:23.126 "peer_address": { 00:17:23.126 "trtype": "TCP", 00:17:23.126 "adrfam": "IPv4", 00:17:23.126 "traddr": "10.0.0.1", 00:17:23.126 "trsvcid": "59550" 00:17:23.126 }, 00:17:23.126 "auth": { 00:17:23.126 "state": "completed", 00:17:23.126 "digest": "sha384", 00:17:23.126 "dhgroup": "ffdhe8192" 00:17:23.126 } 00:17:23.126 } 00:17:23.126 ]' 00:17:23.126 01:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:23.126 01:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:23.126 01:22:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:23.126 01:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:23.126 01:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:23.126 01:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.126 01:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.126 01:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.385 01:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWQwY2VkM2M4NTdmNDA4NzkwZDA0ZDU4NjAyZDg2YWNiNzk1MmUxYmRmZGE5YWZjtDgDPA==: --dhchap-ctrl-secret DHHC-1:03:NWZjM2M1MDM3MzQ2NGM5NzdiYzE4YWU1ZGUyODI4NGJmZDQ3YjhlODQ2OTVhNGVkNjdkZDdjZGY1YzI0ZDVmOfBcvD4=: 00:17:23.950 01:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.950 01:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:23.950 01:22:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.950 01:22:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.950 01:22:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.950 01:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:23.950 01:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:23.950 01:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:24.208 01:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:17:24.208 01:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:24.208 01:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:24.208 01:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:24.208 01:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:24.208 01:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.208 01:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.208 01:22:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.208 01:22:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.208 01:22:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.208 01:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.208 01:22:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
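(The verification and teardown half of the same pass, as traced immediately below, amounts to the following sketch -- feeding jq via a here-string is paraphrased from the wrapper functions, and $hostnqn/$key/$ckey stand in for the host NQN and the DHHC-1 secrets printed in the trace.)

  # confirm the controller actually attached
  [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  # assert the negotiated auth parameters on the target-side queue pair
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
  hostrpc bdev_nvme_detach_controller nvme0
  # repeat the handshake with the kernel initiator, then unregister the host
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" \
       --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 \
       --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"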
00:17:24.466 00:17:24.723 01:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:24.723 01:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:24.723 01:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.723 01:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.723 01:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.723 01:22:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.723 01:22:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.723 01:22:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.723 01:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:24.723 { 00:17:24.723 "cntlid": 91, 00:17:24.723 "qid": 0, 00:17:24.723 "state": "enabled", 00:17:24.723 "thread": "nvmf_tgt_poll_group_000", 00:17:24.723 "listen_address": { 00:17:24.724 "trtype": "TCP", 00:17:24.724 "adrfam": "IPv4", 00:17:24.724 "traddr": "10.0.0.2", 00:17:24.724 "trsvcid": "4420" 00:17:24.724 }, 00:17:24.724 "peer_address": { 00:17:24.724 "trtype": "TCP", 00:17:24.724 "adrfam": "IPv4", 00:17:24.724 "traddr": "10.0.0.1", 00:17:24.724 "trsvcid": "59580" 00:17:24.724 }, 00:17:24.724 "auth": { 00:17:24.724 "state": "completed", 00:17:24.724 "digest": "sha384", 00:17:24.724 "dhgroup": "ffdhe8192" 00:17:24.724 } 00:17:24.724 } 00:17:24.724 ]' 00:17:24.724 01:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:24.724 01:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:24.724 01:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:24.981 01:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:24.981 01:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:24.981 01:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.981 01:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.981 01:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.981 01:22:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTMyN2E5MjgwNWNiODEyY2M5MmE3ZWEwMzRmOWU4ZDVIPGp9: --dhchap-ctrl-secret DHHC-1:02:ZGQ4MDM2ZGJiODdlNWRlMmRkN2E2NDI2ODc5OWZhNWU3NzQyZTk3MDEwMWI2OTQzGm2RVA==: 00:17:25.546 01:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.546 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.546 01:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:25.546 01:22:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:25.546 01:22:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.546 01:22:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.546 01:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:25.546 01:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:25.546 01:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:25.804 01:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:17:25.804 01:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:25.804 01:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:25.804 01:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:25.804 01:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:25.804 01:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.804 01:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.804 01:22:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.804 01:22:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.804 01:22:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.804 01:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.804 01:22:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.371 00:17:26.371 01:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:26.371 01:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:26.371 01:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.371 01:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.371 01:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.371 01:22:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.371 01:22:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.371 01:22:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.371 01:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:26.371 { 
00:17:26.371 "cntlid": 93, 00:17:26.371 "qid": 0, 00:17:26.371 "state": "enabled", 00:17:26.371 "thread": "nvmf_tgt_poll_group_000", 00:17:26.371 "listen_address": { 00:17:26.371 "trtype": "TCP", 00:17:26.371 "adrfam": "IPv4", 00:17:26.371 "traddr": "10.0.0.2", 00:17:26.371 "trsvcid": "4420" 00:17:26.371 }, 00:17:26.371 "peer_address": { 00:17:26.371 "trtype": "TCP", 00:17:26.371 "adrfam": "IPv4", 00:17:26.371 "traddr": "10.0.0.1", 00:17:26.371 "trsvcid": "59598" 00:17:26.371 }, 00:17:26.371 "auth": { 00:17:26.371 "state": "completed", 00:17:26.371 "digest": "sha384", 00:17:26.371 "dhgroup": "ffdhe8192" 00:17:26.371 } 00:17:26.371 } 00:17:26.371 ]' 00:17:26.371 01:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:26.371 01:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:26.628 01:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:26.628 01:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:26.628 01:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:26.628 01:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.628 01:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.628 01:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.628 01:22:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODNmZjI3ZjNkYmIwNDM0YTE3NDNhMDgyOTI5MTEwM2VjNWJlMmJiNGE2NzdmNmE2jIXSfw==: --dhchap-ctrl-secret DHHC-1:01:ZGJkNWM5YmFlMGYwNTI5OGY2YzU5YWNkNGM2NGM1OTPZAj1o: 00:17:27.194 01:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.194 01:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:27.194 01:22:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.194 01:22:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.194 01:22:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.194 01:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:27.194 01:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:27.194 01:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:27.453 01:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:17:27.453 01:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:27.453 01:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:27.453 01:22:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:27.453 01:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:27.453 01:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.453 01:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:27.453 01:22:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.453 01:22:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.453 01:22:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.453 01:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:27.453 01:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:28.021 00:17:28.021 01:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:28.021 01:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:28.021 01:22:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.280 01:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.280 01:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.280 01:22:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.280 01:22:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.280 01:22:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.281 01:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:28.281 { 00:17:28.281 "cntlid": 95, 00:17:28.281 "qid": 0, 00:17:28.281 "state": "enabled", 00:17:28.281 "thread": "nvmf_tgt_poll_group_000", 00:17:28.281 "listen_address": { 00:17:28.281 "trtype": "TCP", 00:17:28.281 "adrfam": "IPv4", 00:17:28.281 "traddr": "10.0.0.2", 00:17:28.281 "trsvcid": "4420" 00:17:28.281 }, 00:17:28.281 "peer_address": { 00:17:28.281 "trtype": "TCP", 00:17:28.281 "adrfam": "IPv4", 00:17:28.281 "traddr": "10.0.0.1", 00:17:28.281 "trsvcid": "59628" 00:17:28.281 }, 00:17:28.281 "auth": { 00:17:28.281 "state": "completed", 00:17:28.281 "digest": "sha384", 00:17:28.281 "dhgroup": "ffdhe8192" 00:17:28.281 } 00:17:28.281 } 00:17:28.281 ]' 00:17:28.281 01:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:28.281 01:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:28.281 01:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:28.281 01:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:28.281 01:22:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:28.281 01:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.281 01:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.281 01:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.547 01:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTVmYTM0MzExYjMzMTAzZDA3MTUyYmI3NDFhOTFkMTM1NTlhNzNmNjY1ZDJjYmEzYTUwZDRiNTQ4MWIxNjZiYVpMFa0=: 00:17:29.115 01:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.115 01:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:29.115 01:22:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.115 01:22:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.115 01:22:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.115 01:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:29.115 01:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:29.115 01:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:29.115 01:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:29.115 01:22:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:29.115 01:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:17:29.115 01:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:29.115 01:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:29.115 01:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:29.115 01:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:29.115 01:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.115 01:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.115 01:22:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.115 01:22:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.115 01:22:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.115 01:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.115 01:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.374 00:17:29.374 01:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:29.374 01:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:29.374 01:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.633 01:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.633 01:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.633 01:22:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.633 01:22:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.633 01:22:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.633 01:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:29.633 { 00:17:29.633 "cntlid": 97, 00:17:29.633 "qid": 0, 00:17:29.633 "state": "enabled", 00:17:29.633 "thread": "nvmf_tgt_poll_group_000", 00:17:29.633 "listen_address": { 00:17:29.633 "trtype": "TCP", 00:17:29.633 "adrfam": "IPv4", 00:17:29.633 "traddr": "10.0.0.2", 00:17:29.633 "trsvcid": "4420" 00:17:29.633 }, 00:17:29.633 "peer_address": { 00:17:29.633 "trtype": "TCP", 00:17:29.633 "adrfam": "IPv4", 00:17:29.633 "traddr": "10.0.0.1", 00:17:29.633 "trsvcid": "36322" 00:17:29.633 }, 00:17:29.633 "auth": { 00:17:29.633 "state": "completed", 00:17:29.633 "digest": "sha512", 00:17:29.633 "dhgroup": "null" 00:17:29.633 } 00:17:29.633 } 00:17:29.633 ]' 00:17:29.633 01:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:29.633 01:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:29.633 01:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:29.633 01:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:29.633 01:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:29.891 01:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.891 01:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.891 01:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.891 01:22:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWQwY2VkM2M4NTdmNDA4NzkwZDA0ZDU4NjAyZDg2YWNiNzk1MmUxYmRmZGE5YWZjtDgDPA==: --dhchap-ctrl-secret 
DHHC-1:03:NWZjM2M1MDM3MzQ2NGM5NzdiYzE4YWU1ZGUyODI4NGJmZDQ3YjhlODQ2OTVhNGVkNjdkZDdjZGY1YzI0ZDVmOfBcvD4=: 00:17:30.459 01:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.459 01:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:30.459 01:22:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.459 01:22:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.459 01:22:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.459 01:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:30.459 01:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:30.459 01:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:30.718 01:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:17:30.718 01:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:30.718 01:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:30.718 01:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:30.718 01:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:30.718 01:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.718 01:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.718 01:22:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.718 01:22:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.718 01:22:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.718 01:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.718 01:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.977 00:17:30.977 01:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:30.977 01:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.977 01:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:30.977 01:22:56 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.977 01:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.977 01:22:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.977 01:22:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.977 01:22:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.977 01:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:30.977 { 00:17:30.977 "cntlid": 99, 00:17:30.977 "qid": 0, 00:17:30.977 "state": "enabled", 00:17:30.977 "thread": "nvmf_tgt_poll_group_000", 00:17:30.977 "listen_address": { 00:17:30.977 "trtype": "TCP", 00:17:30.977 "adrfam": "IPv4", 00:17:30.977 "traddr": "10.0.0.2", 00:17:30.977 "trsvcid": "4420" 00:17:30.977 }, 00:17:30.977 "peer_address": { 00:17:30.977 "trtype": "TCP", 00:17:30.977 "adrfam": "IPv4", 00:17:30.977 "traddr": "10.0.0.1", 00:17:30.977 "trsvcid": "36342" 00:17:30.977 }, 00:17:30.977 "auth": { 00:17:30.977 "state": "completed", 00:17:30.978 "digest": "sha512", 00:17:30.978 "dhgroup": "null" 00:17:30.978 } 00:17:30.978 } 00:17:30.978 ]' 00:17:30.978 01:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:31.235 01:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:31.235 01:22:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:31.235 01:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:31.235 01:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:31.235 01:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.235 01:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.235 01:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.493 01:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTMyN2E5MjgwNWNiODEyY2M5MmE3ZWEwMzRmOWU4ZDVIPGp9: --dhchap-ctrl-secret DHHC-1:02:ZGQ4MDM2ZGJiODdlNWRlMmRkN2E2NDI2ODc5OWZhNWU3NzQyZTk3MDEwMWI2OTQzGm2RVA==: 00:17:32.060 01:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.060 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.060 01:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:32.060 01:22:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.060 01:22:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.060 01:22:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.060 01:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:32.060 01:22:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:32.060 01:22:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:32.060 01:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:17:32.060 01:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:32.060 01:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:32.060 01:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:32.060 01:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:32.060 01:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.060 01:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.060 01:22:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.060 01:22:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.060 01:22:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.060 01:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.060 01:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.319 00:17:32.319 01:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:32.319 01:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.319 01:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:32.577 01:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.577 01:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.577 01:22:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.577 01:22:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.577 01:22:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.577 01:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:32.577 { 00:17:32.577 "cntlid": 101, 00:17:32.577 "qid": 0, 00:17:32.577 "state": "enabled", 00:17:32.577 "thread": "nvmf_tgt_poll_group_000", 00:17:32.577 "listen_address": { 00:17:32.577 "trtype": "TCP", 00:17:32.577 "adrfam": "IPv4", 00:17:32.577 "traddr": "10.0.0.2", 00:17:32.577 "trsvcid": "4420" 00:17:32.577 }, 00:17:32.577 "peer_address": { 00:17:32.577 "trtype": "TCP", 00:17:32.577 "adrfam": "IPv4", 00:17:32.577 "traddr": "10.0.0.1", 00:17:32.577 "trsvcid": "36356" 00:17:32.577 }, 00:17:32.577 "auth": 
{ 00:17:32.577 "state": "completed", 00:17:32.577 "digest": "sha512", 00:17:32.577 "dhgroup": "null" 00:17:32.577 } 00:17:32.577 } 00:17:32.577 ]' 00:17:32.577 01:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:32.577 01:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:32.577 01:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:32.577 01:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:32.577 01:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:32.577 01:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.577 01:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.577 01:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.837 01:22:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODNmZjI3ZjNkYmIwNDM0YTE3NDNhMDgyOTI5MTEwM2VjNWJlMmJiNGE2NzdmNmE2jIXSfw==: --dhchap-ctrl-secret DHHC-1:01:ZGJkNWM5YmFlMGYwNTI5OGY2YzU5YWNkNGM2NGM1OTPZAj1o: 00:17:33.482 01:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.482 01:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:33.482 01:22:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.482 01:22:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.482 01:22:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.482 01:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:33.482 01:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:33.482 01:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:33.482 01:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:17:33.482 01:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:33.482 01:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:33.482 01:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:33.482 01:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:33.482 01:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.482 01:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:33.482 01:22:59 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.482 01:22:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.482 01:22:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.482 01:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:33.482 01:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:33.739 00:17:33.739 01:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:33.739 01:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:33.739 01:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.996 01:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.996 01:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.996 01:22:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.996 01:22:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.996 01:22:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.996 01:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:33.996 { 00:17:33.996 "cntlid": 103, 00:17:33.996 "qid": 0, 00:17:33.996 "state": "enabled", 00:17:33.996 "thread": "nvmf_tgt_poll_group_000", 00:17:33.996 "listen_address": { 00:17:33.996 "trtype": "TCP", 00:17:33.996 "adrfam": "IPv4", 00:17:33.996 "traddr": "10.0.0.2", 00:17:33.997 "trsvcid": "4420" 00:17:33.997 }, 00:17:33.997 "peer_address": { 00:17:33.997 "trtype": "TCP", 00:17:33.997 "adrfam": "IPv4", 00:17:33.997 "traddr": "10.0.0.1", 00:17:33.997 "trsvcid": "36392" 00:17:33.997 }, 00:17:33.997 "auth": { 00:17:33.997 "state": "completed", 00:17:33.997 "digest": "sha512", 00:17:33.997 "dhgroup": "null" 00:17:33.997 } 00:17:33.997 } 00:17:33.997 ]' 00:17:33.997 01:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:33.997 01:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:33.997 01:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:33.997 01:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:33.997 01:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:34.254 01:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.254 01:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.254 01:22:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.254 01:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTVmYTM0MzExYjMzMTAzZDA3MTUyYmI3NDFhOTFkMTM1NTlhNzNmNjY1ZDJjYmEzYTUwZDRiNTQ4MWIxNjZiYVpMFa0=: 00:17:34.819 01:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.819 01:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:34.819 01:23:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.819 01:23:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.819 01:23:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.819 01:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:34.819 01:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:34.819 01:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:34.819 01:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:35.078 01:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:17:35.078 01:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:35.078 01:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:35.078 01:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:35.078 01:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:35.078 01:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.078 01:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.078 01:23:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.078 01:23:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.078 01:23:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.078 01:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.078 01:23:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.336 00:17:35.336 01:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:35.336 01:23:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:35.336 01:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.594 01:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.594 01:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.594 01:23:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.594 01:23:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.594 01:23:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.594 01:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:35.594 { 00:17:35.594 "cntlid": 105, 00:17:35.594 "qid": 0, 00:17:35.594 "state": "enabled", 00:17:35.594 "thread": "nvmf_tgt_poll_group_000", 00:17:35.594 "listen_address": { 00:17:35.594 "trtype": "TCP", 00:17:35.594 "adrfam": "IPv4", 00:17:35.594 "traddr": "10.0.0.2", 00:17:35.594 "trsvcid": "4420" 00:17:35.594 }, 00:17:35.594 "peer_address": { 00:17:35.594 "trtype": "TCP", 00:17:35.594 "adrfam": "IPv4", 00:17:35.594 "traddr": "10.0.0.1", 00:17:35.594 "trsvcid": "36408" 00:17:35.594 }, 00:17:35.594 "auth": { 00:17:35.594 "state": "completed", 00:17:35.594 "digest": "sha512", 00:17:35.594 "dhgroup": "ffdhe2048" 00:17:35.594 } 00:17:35.594 } 00:17:35.594 ]' 00:17:35.594 01:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:35.594 01:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:35.594 01:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:35.594 01:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:35.594 01:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:35.594 01:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.594 01:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.594 01:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.851 01:23:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWQwY2VkM2M4NTdmNDA4NzkwZDA0ZDU4NjAyZDg2YWNiNzk1MmUxYmRmZGE5YWZjtDgDPA==: --dhchap-ctrl-secret DHHC-1:03:NWZjM2M1MDM3MzQ2NGM5NzdiYzE4YWU1ZGUyODI4NGJmZDQ3YjhlODQ2OTVhNGVkNjdkZDdjZGY1YzI0ZDVmOfBcvD4=: 00:17:36.416 01:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.416 01:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:36.416 01:23:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.416 01:23:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
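The sha512/null passes above differ from the ffdhe ones only in that dhgroup "null" performs no Diffie-Hellman exchange; the challenge/response is keyed by the shared DH-HMAC-CHAP secret alone. Each pass also drives an in-kernel check through nvme-cli with the same secrets. A hedged sketch, with HOSTNQN/HOSTID standing in for the uuid-based values used throughout and the base64 payloads elided; the two-digit field in DHHC-1:<xx>:<base64>: records how the secret was transformed (00 = unhashed, 01/02/03 = SHA-256/384/512):

    # Connect with the host secret, plus a controller secret when the key pair
    # is bidirectional (key3 in this run has no controller key, so the test
    # drops --dhchap-ctrl-secret for that pass).
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$HOSTNQN" --hostid "$HOSTID" \
        --dhchap-secret 'DHHC-1:00:...:' --dhchap-ctrl-secret 'DHHC-1:03:...:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

The trace resumes below with the target-side qpair check for the ffdhe2048/key0 pass.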
00:17:36.416 01:23:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.416 01:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:36.416 01:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:36.416 01:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:36.416 01:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:17:36.416 01:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:36.416 01:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:36.416 01:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:36.416 01:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:36.416 01:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.416 01:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:36.416 01:23:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.416 01:23:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.674 01:23:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.674 01:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:36.674 01:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:36.674 00:17:36.674 01:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:36.674 01:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:36.674 01:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.932 01:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.932 01:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.932 01:23:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.932 01:23:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.932 01:23:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.932 01:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:36.932 { 00:17:36.932 "cntlid": 107, 00:17:36.932 "qid": 0, 00:17:36.932 "state": "enabled", 00:17:36.932 "thread": 
"nvmf_tgt_poll_group_000", 00:17:36.932 "listen_address": { 00:17:36.932 "trtype": "TCP", 00:17:36.932 "adrfam": "IPv4", 00:17:36.932 "traddr": "10.0.0.2", 00:17:36.932 "trsvcid": "4420" 00:17:36.932 }, 00:17:36.932 "peer_address": { 00:17:36.932 "trtype": "TCP", 00:17:36.932 "adrfam": "IPv4", 00:17:36.932 "traddr": "10.0.0.1", 00:17:36.932 "trsvcid": "36446" 00:17:36.932 }, 00:17:36.932 "auth": { 00:17:36.932 "state": "completed", 00:17:36.932 "digest": "sha512", 00:17:36.932 "dhgroup": "ffdhe2048" 00:17:36.932 } 00:17:36.932 } 00:17:36.932 ]' 00:17:36.932 01:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:36.932 01:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:36.932 01:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:36.932 01:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:36.932 01:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:37.189 01:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.189 01:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.189 01:23:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.189 01:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTMyN2E5MjgwNWNiODEyY2M5MmE3ZWEwMzRmOWU4ZDVIPGp9: --dhchap-ctrl-secret DHHC-1:02:ZGQ4MDM2ZGJiODdlNWRlMmRkN2E2NDI2ODc5OWZhNWU3NzQyZTk3MDEwMWI2OTQzGm2RVA==: 00:17:37.750 01:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.751 01:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:37.751 01:23:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.751 01:23:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.751 01:23:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.751 01:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:37.751 01:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:37.751 01:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:38.007 01:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:17:38.007 01:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:38.007 01:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:38.007 01:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:38.007 01:23:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:38.007 01:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.007 01:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.008 01:23:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.008 01:23:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.008 01:23:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.008 01:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.008 01:23:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.265 00:17:38.265 01:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:38.265 01:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:38.265 01:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.522 01:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.522 01:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.522 01:23:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.522 01:23:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.522 01:23:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.522 01:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:38.522 { 00:17:38.522 "cntlid": 109, 00:17:38.522 "qid": 0, 00:17:38.522 "state": "enabled", 00:17:38.522 "thread": "nvmf_tgt_poll_group_000", 00:17:38.522 "listen_address": { 00:17:38.522 "trtype": "TCP", 00:17:38.522 "adrfam": "IPv4", 00:17:38.522 "traddr": "10.0.0.2", 00:17:38.522 "trsvcid": "4420" 00:17:38.522 }, 00:17:38.522 "peer_address": { 00:17:38.522 "trtype": "TCP", 00:17:38.522 "adrfam": "IPv4", 00:17:38.522 "traddr": "10.0.0.1", 00:17:38.522 "trsvcid": "57294" 00:17:38.522 }, 00:17:38.522 "auth": { 00:17:38.522 "state": "completed", 00:17:38.522 "digest": "sha512", 00:17:38.522 "dhgroup": "ffdhe2048" 00:17:38.522 } 00:17:38.522 } 00:17:38.522 ]' 00:17:38.522 01:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:38.522 01:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:38.522 01:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:38.522 01:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:38.522 01:23:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:38.522 01:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.522 01:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.522 01:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.800 01:23:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODNmZjI3ZjNkYmIwNDM0YTE3NDNhMDgyOTI5MTEwM2VjNWJlMmJiNGE2NzdmNmE2jIXSfw==: --dhchap-ctrl-secret DHHC-1:01:ZGJkNWM5YmFlMGYwNTI5OGY2YzU5YWNkNGM2NGM1OTPZAj1o: 00:17:39.364 01:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.364 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.364 01:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:39.364 01:23:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.364 01:23:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.364 01:23:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.364 01:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:39.364 01:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:39.364 01:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:39.364 01:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:17:39.364 01:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:39.364 01:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:39.364 01:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:39.364 01:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:39.364 01:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.364 01:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:39.364 01:23:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.364 01:23:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.364 01:23:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.364 01:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:39.364 01:23:05 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:39.621 00:17:39.621 01:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:39.621 01:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:39.621 01:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.878 01:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.878 01:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.878 01:23:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.878 01:23:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.878 01:23:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.878 01:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:39.878 { 00:17:39.878 "cntlid": 111, 00:17:39.878 "qid": 0, 00:17:39.878 "state": "enabled", 00:17:39.878 "thread": "nvmf_tgt_poll_group_000", 00:17:39.878 "listen_address": { 00:17:39.878 "trtype": "TCP", 00:17:39.878 "adrfam": "IPv4", 00:17:39.878 "traddr": "10.0.0.2", 00:17:39.878 "trsvcid": "4420" 00:17:39.878 }, 00:17:39.878 "peer_address": { 00:17:39.878 "trtype": "TCP", 00:17:39.878 "adrfam": "IPv4", 00:17:39.878 "traddr": "10.0.0.1", 00:17:39.879 "trsvcid": "57320" 00:17:39.879 }, 00:17:39.879 "auth": { 00:17:39.879 "state": "completed", 00:17:39.879 "digest": "sha512", 00:17:39.879 "dhgroup": "ffdhe2048" 00:17:39.879 } 00:17:39.879 } 00:17:39.879 ]' 00:17:39.879 01:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:39.879 01:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:39.879 01:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:39.879 01:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:39.879 01:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:40.137 01:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.137 01:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.137 01:23:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.137 01:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTVmYTM0MzExYjMzMTAzZDA3MTUyYmI3NDFhOTFkMTM1NTlhNzNmNjY1ZDJjYmEzYTUwZDRiNTQ4MWIxNjZiYVpMFa0=: 00:17:40.704 01:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.704 01:23:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:40.704 01:23:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.704 01:23:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.704 01:23:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.704 01:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:40.704 01:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:40.704 01:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:40.704 01:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:40.962 01:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:17:40.963 01:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:40.963 01:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:40.963 01:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:40.963 01:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:40.963 01:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.963 01:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.963 01:23:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.963 01:23:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.963 01:23:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.963 01:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.963 01:23:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.221 00:17:41.221 01:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:41.221 01:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:41.221 01:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.221 01:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.221 01:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.221 01:23:07 nvmf_tcp.nvmf_auth_target -- 
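Every host-side call in this trace goes through the hostrpc wrapper expanded at target/auth.sh@31, and each iteration first pins the initiator to a single digest/dhgroup pair. A minimal sketch of that wrapper and the reconfiguration step, using only the paths and flags visible in this run:

    # hostrpc: SPDK RPC client pointed at the host-side application socket
    hostrpc() {
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
    }

    # restrict the initiator to the digest/dhgroup under test (target/auth.sh@94),
    # e.g. the ffdhe3072 pass above:
    hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
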
common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.221 01:23:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.221 01:23:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.221 01:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:41.221 { 00:17:41.221 "cntlid": 113, 00:17:41.221 "qid": 0, 00:17:41.221 "state": "enabled", 00:17:41.221 "thread": "nvmf_tgt_poll_group_000", 00:17:41.221 "listen_address": { 00:17:41.221 "trtype": "TCP", 00:17:41.221 "adrfam": "IPv4", 00:17:41.221 "traddr": "10.0.0.2", 00:17:41.221 "trsvcid": "4420" 00:17:41.221 }, 00:17:41.221 "peer_address": { 00:17:41.221 "trtype": "TCP", 00:17:41.221 "adrfam": "IPv4", 00:17:41.221 "traddr": "10.0.0.1", 00:17:41.221 "trsvcid": "57342" 00:17:41.221 }, 00:17:41.221 "auth": { 00:17:41.221 "state": "completed", 00:17:41.221 "digest": "sha512", 00:17:41.221 "dhgroup": "ffdhe3072" 00:17:41.221 } 00:17:41.221 } 00:17:41.221 ]' 00:17:41.221 01:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:41.479 01:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:41.479 01:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:41.479 01:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:41.479 01:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:41.479 01:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.479 01:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.479 01:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.737 01:23:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWQwY2VkM2M4NTdmNDA4NzkwZDA0ZDU4NjAyZDg2YWNiNzk1MmUxYmRmZGE5YWZjtDgDPA==: --dhchap-ctrl-secret DHHC-1:03:NWZjM2M1MDM3MzQ2NGM5NzdiYzE4YWU1ZGUyODI4NGJmZDQ3YjhlODQ2OTVhNGVkNjdkZDdjZGY1YzI0ZDVmOfBcvD4=: 00:17:42.302 01:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.302 01:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:42.302 01:23:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.302 01:23:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.302 01:23:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.302 01:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:42.302 01:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:42.302 01:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:42.302 01:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:17:42.302 01:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:42.302 01:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:42.302 01:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:42.302 01:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:42.302 01:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.302 01:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.302 01:23:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.302 01:23:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.302 01:23:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.302 01:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.302 01:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.560 00:17:42.560 01:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:42.560 01:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:42.560 01:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.818 01:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.818 01:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.818 01:23:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.818 01:23:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.818 01:23:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.818 01:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:42.818 { 00:17:42.818 "cntlid": 115, 00:17:42.818 "qid": 0, 00:17:42.818 "state": "enabled", 00:17:42.818 "thread": "nvmf_tgt_poll_group_000", 00:17:42.818 "listen_address": { 00:17:42.818 "trtype": "TCP", 00:17:42.818 "adrfam": "IPv4", 00:17:42.818 "traddr": "10.0.0.2", 00:17:42.818 "trsvcid": "4420" 00:17:42.818 }, 00:17:42.818 "peer_address": { 00:17:42.818 "trtype": "TCP", 00:17:42.818 "adrfam": "IPv4", 00:17:42.818 "traddr": "10.0.0.1", 00:17:42.818 "trsvcid": "57374" 00:17:42.818 }, 00:17:42.818 "auth": { 00:17:42.818 "state": "completed", 00:17:42.818 "digest": "sha512", 00:17:42.818 "dhgroup": "ffdhe3072" 00:17:42.818 } 00:17:42.818 } 
00:17:42.818 ]' 00:17:42.818 01:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:42.818 01:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:42.818 01:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:42.818 01:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:42.818 01:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:42.818 01:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.818 01:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.818 01:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.076 01:23:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTMyN2E5MjgwNWNiODEyY2M5MmE3ZWEwMzRmOWU4ZDVIPGp9: --dhchap-ctrl-secret DHHC-1:02:ZGQ4MDM2ZGJiODdlNWRlMmRkN2E2NDI2ODc5OWZhNWU3NzQyZTk3MDEwMWI2OTQzGm2RVA==: 00:17:43.642 01:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.642 01:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:43.642 01:23:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.642 01:23:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.642 01:23:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.642 01:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:43.642 01:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:43.642 01:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:43.900 01:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:17:43.900 01:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:43.900 01:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:43.900 01:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:43.900 01:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:43.900 01:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.900 01:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.900 01:23:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.900 01:23:09 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.900 01:23:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.900 01:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.901 01:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.158 00:17:44.158 01:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:44.158 01:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:44.158 01:23:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.158 01:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.158 01:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.158 01:23:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.158 01:23:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.416 01:23:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.416 01:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:44.416 { 00:17:44.416 "cntlid": 117, 00:17:44.416 "qid": 0, 00:17:44.416 "state": "enabled", 00:17:44.416 "thread": "nvmf_tgt_poll_group_000", 00:17:44.416 "listen_address": { 00:17:44.416 "trtype": "TCP", 00:17:44.416 "adrfam": "IPv4", 00:17:44.416 "traddr": "10.0.0.2", 00:17:44.416 "trsvcid": "4420" 00:17:44.416 }, 00:17:44.416 "peer_address": { 00:17:44.416 "trtype": "TCP", 00:17:44.416 "adrfam": "IPv4", 00:17:44.416 "traddr": "10.0.0.1", 00:17:44.416 "trsvcid": "57408" 00:17:44.416 }, 00:17:44.416 "auth": { 00:17:44.416 "state": "completed", 00:17:44.416 "digest": "sha512", 00:17:44.416 "dhgroup": "ffdhe3072" 00:17:44.416 } 00:17:44.416 } 00:17:44.416 ]' 00:17:44.416 01:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:44.416 01:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:44.416 01:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:44.416 01:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:44.416 01:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:44.416 01:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.416 01:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.416 01:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.675 01:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODNmZjI3ZjNkYmIwNDM0YTE3NDNhMDgyOTI5MTEwM2VjNWJlMmJiNGE2NzdmNmE2jIXSfw==: --dhchap-ctrl-secret DHHC-1:01:ZGJkNWM5YmFlMGYwNTI5OGY2YzU5YWNkNGM2NGM1OTPZAj1o: 00:17:45.241 01:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.241 01:23:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:45.241 01:23:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.241 01:23:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.241 01:23:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.241 01:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:45.241 01:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:45.241 01:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:45.241 01:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:17:45.241 01:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:45.241 01:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:45.241 01:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:45.241 01:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:45.241 01:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.241 01:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:45.241 01:23:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.241 01:23:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.241 01:23:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.241 01:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:45.242 01:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:45.500 00:17:45.500 01:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:45.500 01:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:45.500 01:23:11 nvmf_tcp.nvmf_auth_target -- 
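Note that the key3 pass registers the host with a one-way secret only: the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion at target/auth.sh@37 emits nothing when no controller key exists for that slot, so nvmf_subsystem_add_host above is called without --dhchap-ctrlr-key and bidirectional authentication is skipped for key3. A sketch of that conditional, with the array contents inferred from this log rather than shown verbatim in it:

    # slot 3 carries no controller key in this run (inferred from the trace)
    keyid=3
    ckeys=(ckey0 ckey1 ckey2 "")

    # expands to an empty array when ckeys[keyid] is unset or empty
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})

    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
        --dhchap-key "key$keyid" "${ckey[@]}"
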
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.758 01:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.758 01:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.758 01:23:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.758 01:23:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.758 01:23:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.758 01:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:45.758 { 00:17:45.758 "cntlid": 119, 00:17:45.758 "qid": 0, 00:17:45.758 "state": "enabled", 00:17:45.758 "thread": "nvmf_tgt_poll_group_000", 00:17:45.758 "listen_address": { 00:17:45.758 "trtype": "TCP", 00:17:45.758 "adrfam": "IPv4", 00:17:45.758 "traddr": "10.0.0.2", 00:17:45.758 "trsvcid": "4420" 00:17:45.758 }, 00:17:45.758 "peer_address": { 00:17:45.758 "trtype": "TCP", 00:17:45.758 "adrfam": "IPv4", 00:17:45.758 "traddr": "10.0.0.1", 00:17:45.758 "trsvcid": "57420" 00:17:45.758 }, 00:17:45.758 "auth": { 00:17:45.758 "state": "completed", 00:17:45.758 "digest": "sha512", 00:17:45.758 "dhgroup": "ffdhe3072" 00:17:45.758 } 00:17:45.758 } 00:17:45.758 ]' 00:17:45.758 01:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:45.758 01:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:45.758 01:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:45.758 01:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:45.758 01:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:46.017 01:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.017 01:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.017 01:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.017 01:23:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTVmYTM0MzExYjMzMTAzZDA3MTUyYmI3NDFhOTFkMTM1NTlhNzNmNjY1ZDJjYmEzYTUwZDRiNTQ4MWIxNjZiYVpMFa0=: 00:17:46.584 01:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.584 01:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:46.584 01:23:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.584 01:23:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.584 01:23:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.584 01:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:46.584 01:23:12 
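Each pass is validated the same way: bdev_nvme_get_controllers must list nvme0, and the qpair returned by nvmf_subsystem_get_qpairs must report the configured digest, dhgroup and a completed auth state. Condensed from the target/auth.sh@44-@48 checks above, assuming rpc_cmd and hostrpc print the JSON the trace shows:

    name=$(hostrpc bdev_nvme_get_controllers | jq -r '.[].name')
    [[ "$name" == "nvme0" ]]

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha512" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe3072" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]
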
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:46.584 01:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:46.584 01:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:46.843 01:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:17:46.843 01:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:46.843 01:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:46.843 01:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:46.843 01:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:46.843 01:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.843 01:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.843 01:23:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.843 01:23:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.843 01:23:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.843 01:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.843 01:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.102 00:17:47.102 01:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:47.102 01:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:47.102 01:23:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.390 01:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.390 01:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.390 01:23:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.390 01:23:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.390 01:23:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.390 01:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:47.390 { 00:17:47.390 "cntlid": 121, 00:17:47.390 "qid": 0, 00:17:47.390 "state": "enabled", 00:17:47.390 "thread": "nvmf_tgt_poll_group_000", 00:17:47.390 "listen_address": { 00:17:47.390 "trtype": "TCP", 00:17:47.390 "adrfam": "IPv4", 
00:17:47.390 "traddr": "10.0.0.2", 00:17:47.390 "trsvcid": "4420" 00:17:47.390 }, 00:17:47.390 "peer_address": { 00:17:47.390 "trtype": "TCP", 00:17:47.390 "adrfam": "IPv4", 00:17:47.390 "traddr": "10.0.0.1", 00:17:47.390 "trsvcid": "57452" 00:17:47.390 }, 00:17:47.390 "auth": { 00:17:47.390 "state": "completed", 00:17:47.390 "digest": "sha512", 00:17:47.390 "dhgroup": "ffdhe4096" 00:17:47.390 } 00:17:47.390 } 00:17:47.390 ]' 00:17:47.390 01:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:47.390 01:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:47.390 01:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:47.390 01:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:47.390 01:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:47.390 01:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.390 01:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.390 01:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.652 01:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWQwY2VkM2M4NTdmNDA4NzkwZDA0ZDU4NjAyZDg2YWNiNzk1MmUxYmRmZGE5YWZjtDgDPA==: --dhchap-ctrl-secret DHHC-1:03:NWZjM2M1MDM3MzQ2NGM5NzdiYzE4YWU1ZGUyODI4NGJmZDQ3YjhlODQ2OTVhNGVkNjdkZDdjZGY1YzI0ZDVmOfBcvD4=: 00:17:48.219 01:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.219 01:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:48.219 01:23:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.219 01:23:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.219 01:23:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.219 01:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:48.219 01:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:48.220 01:23:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:48.220 01:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:17:48.220 01:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:48.220 01:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:48.220 01:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:48.220 01:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:48.220 01:23:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.220 01:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.220 01:23:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.220 01:23:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.220 01:23:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.220 01:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.220 01:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.479 00:17:48.479 01:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:48.479 01:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:48.479 01:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.737 01:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.737 01:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.737 01:23:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.737 01:23:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.737 01:23:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.737 01:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:48.737 { 00:17:48.737 "cntlid": 123, 00:17:48.737 "qid": 0, 00:17:48.737 "state": "enabled", 00:17:48.737 "thread": "nvmf_tgt_poll_group_000", 00:17:48.737 "listen_address": { 00:17:48.737 "trtype": "TCP", 00:17:48.737 "adrfam": "IPv4", 00:17:48.737 "traddr": "10.0.0.2", 00:17:48.737 "trsvcid": "4420" 00:17:48.737 }, 00:17:48.737 "peer_address": { 00:17:48.737 "trtype": "TCP", 00:17:48.737 "adrfam": "IPv4", 00:17:48.737 "traddr": "10.0.0.1", 00:17:48.737 "trsvcid": "58572" 00:17:48.737 }, 00:17:48.737 "auth": { 00:17:48.737 "state": "completed", 00:17:48.737 "digest": "sha512", 00:17:48.737 "dhgroup": "ffdhe4096" 00:17:48.737 } 00:17:48.737 } 00:17:48.737 ]' 00:17:48.737 01:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:48.737 01:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:48.737 01:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:48.737 01:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:48.737 01:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:48.737 01:23:14 
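The target and host sides are always configured as a matched pair: the subsystem authorizes the host NQN with a key slot, then the host attaches a controller presenting the same keys, as in the key1 pass above (target/auth.sh@39-@40):

    # target side: authorize the host NQN for the subsystem with key slot 1
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # host side: attach using the same key pair
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
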
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.737 01:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.737 01:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.996 01:23:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTMyN2E5MjgwNWNiODEyY2M5MmE3ZWEwMzRmOWU4ZDVIPGp9: --dhchap-ctrl-secret DHHC-1:02:ZGQ4MDM2ZGJiODdlNWRlMmRkN2E2NDI2ODc5OWZhNWU3NzQyZTk3MDEwMWI2OTQzGm2RVA==: 00:17:49.564 01:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.564 01:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:49.564 01:23:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.564 01:23:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.564 01:23:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.564 01:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:49.564 01:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:49.564 01:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:49.823 01:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:17:49.823 01:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:49.823 01:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:49.823 01:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:49.823 01:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:49.823 01:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.823 01:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.823 01:23:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.823 01:23:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.823 01:23:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.823 01:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.823 01:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.082 00:17:50.082 01:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:50.082 01:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:50.082 01:23:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.341 01:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.341 01:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.341 01:23:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.341 01:23:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.341 01:23:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.341 01:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:50.341 { 00:17:50.341 "cntlid": 125, 00:17:50.341 "qid": 0, 00:17:50.341 "state": "enabled", 00:17:50.341 "thread": "nvmf_tgt_poll_group_000", 00:17:50.341 "listen_address": { 00:17:50.341 "trtype": "TCP", 00:17:50.341 "adrfam": "IPv4", 00:17:50.341 "traddr": "10.0.0.2", 00:17:50.341 "trsvcid": "4420" 00:17:50.341 }, 00:17:50.341 "peer_address": { 00:17:50.341 "trtype": "TCP", 00:17:50.341 "adrfam": "IPv4", 00:17:50.341 "traddr": "10.0.0.1", 00:17:50.341 "trsvcid": "58592" 00:17:50.341 }, 00:17:50.341 "auth": { 00:17:50.341 "state": "completed", 00:17:50.341 "digest": "sha512", 00:17:50.341 "dhgroup": "ffdhe4096" 00:17:50.341 } 00:17:50.341 } 00:17:50.341 ]' 00:17:50.341 01:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:50.341 01:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:50.341 01:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:50.341 01:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:50.341 01:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:50.341 01:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.341 01:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.341 01:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.600 01:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODNmZjI3ZjNkYmIwNDM0YTE3NDNhMDgyOTI5MTEwM2VjNWJlMmJiNGE2NzdmNmE2jIXSfw==: --dhchap-ctrl-secret DHHC-1:01:ZGJkNWM5YmFlMGYwNTI5OGY2YzU5YWNkNGM2NGM1OTPZAj1o: 00:17:51.168 01:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.168 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
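Between combinations the state is unwound in reverse, which is why every pass above ends with the same three steps (target/auth.sh@49, @55, @56):

    hostrpc bdev_nvme_detach_controller nvme0        # drop the SPDK host-side controller
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0    # drop the kernel-initiator session
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
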
00:17:51.168 01:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:51.168 01:23:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.168 01:23:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.168 01:23:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.168 01:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:51.168 01:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:51.168 01:23:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:51.427 01:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:17:51.427 01:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:51.427 01:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:51.427 01:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:51.427 01:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:51.427 01:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.427 01:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:51.427 01:23:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.427 01:23:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.427 01:23:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.427 01:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:51.427 01:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:51.685 00:17:51.685 01:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:51.685 01:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.685 01:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:51.685 01:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.685 01:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.685 01:23:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.685 01:23:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:17:51.685 01:23:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.685 01:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:51.685 { 00:17:51.685 "cntlid": 127, 00:17:51.685 "qid": 0, 00:17:51.685 "state": "enabled", 00:17:51.685 "thread": "nvmf_tgt_poll_group_000", 00:17:51.685 "listen_address": { 00:17:51.685 "trtype": "TCP", 00:17:51.685 "adrfam": "IPv4", 00:17:51.685 "traddr": "10.0.0.2", 00:17:51.685 "trsvcid": "4420" 00:17:51.685 }, 00:17:51.685 "peer_address": { 00:17:51.685 "trtype": "TCP", 00:17:51.685 "adrfam": "IPv4", 00:17:51.685 "traddr": "10.0.0.1", 00:17:51.685 "trsvcid": "58616" 00:17:51.685 }, 00:17:51.685 "auth": { 00:17:51.685 "state": "completed", 00:17:51.685 "digest": "sha512", 00:17:51.686 "dhgroup": "ffdhe4096" 00:17:51.686 } 00:17:51.686 } 00:17:51.686 ]' 00:17:51.686 01:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:51.686 01:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:51.686 01:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:51.944 01:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:51.944 01:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:51.944 01:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.944 01:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.944 01:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.944 01:23:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTVmYTM0MzExYjMzMTAzZDA3MTUyYmI3NDFhOTFkMTM1NTlhNzNmNjY1ZDJjYmEzYTUwZDRiNTQ4MWIxNjZiYVpMFa0=: 00:17:52.512 01:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.512 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.512 01:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:52.512 01:23:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.512 01:23:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.512 01:23:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.512 01:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:52.512 01:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:52.512 01:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:52.512 01:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:52.770 01:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:17:52.770 01:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:52.770 01:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:52.770 01:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:52.770 01:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:52.770 01:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.770 01:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.770 01:23:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.770 01:23:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.770 01:23:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.770 01:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.771 01:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.028 00:17:53.028 01:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:53.028 01:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:53.028 01:23:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.287 01:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.287 01:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.287 01:23:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.287 01:23:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.287 01:23:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.287 01:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:53.287 { 00:17:53.287 "cntlid": 129, 00:17:53.287 "qid": 0, 00:17:53.287 "state": "enabled", 00:17:53.287 "thread": "nvmf_tgt_poll_group_000", 00:17:53.287 "listen_address": { 00:17:53.287 "trtype": "TCP", 00:17:53.287 "adrfam": "IPv4", 00:17:53.287 "traddr": "10.0.0.2", 00:17:53.287 "trsvcid": "4420" 00:17:53.287 }, 00:17:53.287 "peer_address": { 00:17:53.287 "trtype": "TCP", 00:17:53.287 "adrfam": "IPv4", 00:17:53.287 "traddr": "10.0.0.1", 00:17:53.287 "trsvcid": "58650" 00:17:53.287 }, 00:17:53.287 "auth": { 00:17:53.287 "state": "completed", 00:17:53.287 "digest": "sha512", 00:17:53.287 "dhgroup": "ffdhe6144" 00:17:53.287 } 00:17:53.287 } 00:17:53.287 ]' 00:17:53.287 01:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:53.287 01:23:19 
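The progression visible here, ffdhe2048 through ffdhe6144 each exercised with key slots 0-3, is driven by the nested loop traced at target/auth.sh@92-@96. A sketch of its shape, with the array contents inferred from the combinations this section actually runs:

    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)   # groups seen in this section
    keys=(key0 key1 key2 key3)

    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha512 "$dhgroup" "$keyid"   # sha512 is the digest under test here
        done
    done
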
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:53.287 01:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:53.287 01:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:53.287 01:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:53.287 01:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.287 01:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.287 01:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.546 01:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWQwY2VkM2M4NTdmNDA4NzkwZDA0ZDU4NjAyZDg2YWNiNzk1MmUxYmRmZGE5YWZjtDgDPA==: --dhchap-ctrl-secret DHHC-1:03:NWZjM2M1MDM3MzQ2NGM5NzdiYzE4YWU1ZGUyODI4NGJmZDQ3YjhlODQ2OTVhNGVkNjdkZDdjZGY1YzI0ZDVmOfBcvD4=: 00:17:54.114 01:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.114 01:23:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:54.114 01:23:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.114 01:23:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.114 01:23:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.114 01:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:54.114 01:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:54.114 01:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:54.373 01:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:17:54.373 01:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:54.373 01:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:54.373 01:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:54.373 01:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:54.373 01:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.373 01:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.373 01:23:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.373 01:23:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.373 01:23:20 
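As an aside on the secrets themselves: the DHHC-1:<nn>:<base64>: strings passed to nvme connect follow the NVMe DH-HMAC-CHAP secret representation, where the two-digit field indicates how the secret was transformed (00 for an untransformed secret, 01/02/03 for SHA-256/384/512). That matches the 00-03 variants appearing throughout this log, though the mapping is stated here from the specification rather than from anything the trace itself prints.
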
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.373 01:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.373 01:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.631 00:17:54.631 01:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:54.631 01:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:54.631 01:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.890 01:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.890 01:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.890 01:23:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.890 01:23:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.890 01:23:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.890 01:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.890 { 00:17:54.890 "cntlid": 131, 00:17:54.890 "qid": 0, 00:17:54.890 "state": "enabled", 00:17:54.890 "thread": "nvmf_tgt_poll_group_000", 00:17:54.890 "listen_address": { 00:17:54.890 "trtype": "TCP", 00:17:54.890 "adrfam": "IPv4", 00:17:54.890 "traddr": "10.0.0.2", 00:17:54.890 "trsvcid": "4420" 00:17:54.890 }, 00:17:54.890 "peer_address": { 00:17:54.890 "trtype": "TCP", 00:17:54.890 "adrfam": "IPv4", 00:17:54.890 "traddr": "10.0.0.1", 00:17:54.890 "trsvcid": "58682" 00:17:54.890 }, 00:17:54.890 "auth": { 00:17:54.890 "state": "completed", 00:17:54.890 "digest": "sha512", 00:17:54.890 "dhgroup": "ffdhe6144" 00:17:54.890 } 00:17:54.890 } 00:17:54.890 ]' 00:17:54.890 01:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:54.890 01:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:54.890 01:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:54.890 01:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:54.890 01:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:54.890 01:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.890 01:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.890 01:23:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.148 01:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTMyN2E5MjgwNWNiODEyY2M5MmE3ZWEwMzRmOWU4ZDVIPGp9: --dhchap-ctrl-secret DHHC-1:02:ZGQ4MDM2ZGJiODdlNWRlMmRkN2E2NDI2ODc5OWZhNWU3NzQyZTk3MDEwMWI2OTQzGm2RVA==: 00:17:55.727 01:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.727 01:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:55.727 01:23:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.727 01:23:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.727 01:23:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.727 01:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:55.727 01:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:55.727 01:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:56.033 01:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:17:56.033 01:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:56.033 01:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:56.033 01:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:56.033 01:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:56.033 01:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.033 01:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.033 01:23:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.033 01:23:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.033 01:23:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.033 01:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.033 01:23:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.348 00:17:56.348 01:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:56.348 01:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.348 01:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:56.348 01:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.348 01:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.348 01:23:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.348 01:23:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.348 01:23:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.348 01:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:56.348 { 00:17:56.348 "cntlid": 133, 00:17:56.348 "qid": 0, 00:17:56.348 "state": "enabled", 00:17:56.348 "thread": "nvmf_tgt_poll_group_000", 00:17:56.348 "listen_address": { 00:17:56.348 "trtype": "TCP", 00:17:56.348 "adrfam": "IPv4", 00:17:56.348 "traddr": "10.0.0.2", 00:17:56.348 "trsvcid": "4420" 00:17:56.348 }, 00:17:56.348 "peer_address": { 00:17:56.348 "trtype": "TCP", 00:17:56.348 "adrfam": "IPv4", 00:17:56.348 "traddr": "10.0.0.1", 00:17:56.348 "trsvcid": "58718" 00:17:56.348 }, 00:17:56.348 "auth": { 00:17:56.348 "state": "completed", 00:17:56.348 "digest": "sha512", 00:17:56.348 "dhgroup": "ffdhe6144" 00:17:56.348 } 00:17:56.348 } 00:17:56.348 ]' 00:17:56.348 01:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.607 01:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:56.607 01:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.607 01:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:56.607 01:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:56.607 01:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.607 01:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.607 01:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.607 01:23:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODNmZjI3ZjNkYmIwNDM0YTE3NDNhMDgyOTI5MTEwM2VjNWJlMmJiNGE2NzdmNmE2jIXSfw==: --dhchap-ctrl-secret DHHC-1:01:ZGJkNWM5YmFlMGYwNTI5OGY2YzU5YWNkNGM2NGM1OTPZAj1o: 00:17:57.171 01:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.429 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.429 01:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:57.429 01:23:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.429 01:23:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.429 01:23:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
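What each connect_authenticate iteration above reduces to, as a minimal bash sketch. Assumptions: the target subsystem nqn.2024-03.io.spdk:cnode0 is already listening on 10.0.0.2:4420, the host-side bdev RPC server is on /var/tmp/host.sock, and the DH-HMAC-CHAP keys key0..key3 and ckey0..ckey3 were registered with the target earlier in this run by target/auth.sh. Target-side calls are shown against the default spdk.sock; in this CI run rpc_cmd additionally wraps them (e.g. inside the cvl_0_0_ns_spdk netns), which is elided here.

#!/usr/bin/env bash
set -e
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
digest=sha512 dhgroup=ffdhe6144 keyid=0

# Pin the host initiator to the digest/DH group under test.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Allow the host on the subsystem; the ctrlr key enables bidirectional auth.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Attach; DH-HMAC-CHAP runs during the CONNECT exchange.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Verify what was negotiated, exactly as the jq probes in the log do.
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # expect "completed"

# Tear down for the next digest/dhgroup/key combination.
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"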
00:17:57.429 01:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:57.429 01:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:57.429 01:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:57.429 01:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:17:57.429 01:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:57.429 01:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:57.429 01:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:57.429 01:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:57.429 01:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.429 01:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:57.429 01:23:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.429 01:23:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.429 01:23:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.429 01:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:57.429 01:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:57.686 00:17:57.686 01:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:57.686 01:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.686 01:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:57.943 01:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.943 01:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.943 01:23:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.943 01:23:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.943 01:23:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.943 01:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:57.943 { 00:17:57.943 "cntlid": 135, 00:17:57.943 "qid": 0, 00:17:57.943 "state": "enabled", 00:17:57.943 "thread": "nvmf_tgt_poll_group_000", 00:17:57.943 "listen_address": { 00:17:57.943 "trtype": "TCP", 00:17:57.943 "adrfam": "IPv4", 00:17:57.943 "traddr": "10.0.0.2", 00:17:57.943 "trsvcid": 
"4420" 00:17:57.943 }, 00:17:57.943 "peer_address": { 00:17:57.943 "trtype": "TCP", 00:17:57.943 "adrfam": "IPv4", 00:17:57.943 "traddr": "10.0.0.1", 00:17:57.943 "trsvcid": "58750" 00:17:57.943 }, 00:17:57.943 "auth": { 00:17:57.943 "state": "completed", 00:17:57.943 "digest": "sha512", 00:17:57.943 "dhgroup": "ffdhe6144" 00:17:57.943 } 00:17:57.943 } 00:17:57.943 ]' 00:17:57.943 01:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:57.943 01:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:57.943 01:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:57.943 01:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:57.943 01:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:58.201 01:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.201 01:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.201 01:23:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.201 01:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTVmYTM0MzExYjMzMTAzZDA3MTUyYmI3NDFhOTFkMTM1NTlhNzNmNjY1ZDJjYmEzYTUwZDRiNTQ4MWIxNjZiYVpMFa0=: 00:17:58.764 01:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.764 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.764 01:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:58.764 01:23:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.764 01:23:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.764 01:23:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.764 01:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:58.764 01:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:58.764 01:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:58.764 01:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:59.021 01:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:17:59.021 01:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.021 01:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:59.021 01:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:59.021 01:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:59.021 01:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.021 01:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.021 01:23:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.021 01:23:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.021 01:23:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.021 01:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.021 01:23:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.587 00:17:59.587 01:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:59.587 01:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.587 01:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:59.587 01:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.587 01:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.587 01:23:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.587 01:23:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.587 01:23:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.587 01:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:59.587 { 00:17:59.587 "cntlid": 137, 00:17:59.587 "qid": 0, 00:17:59.587 "state": "enabled", 00:17:59.587 "thread": "nvmf_tgt_poll_group_000", 00:17:59.587 "listen_address": { 00:17:59.587 "trtype": "TCP", 00:17:59.587 "adrfam": "IPv4", 00:17:59.587 "traddr": "10.0.0.2", 00:17:59.587 "trsvcid": "4420" 00:17:59.587 }, 00:17:59.587 "peer_address": { 00:17:59.587 "trtype": "TCP", 00:17:59.587 "adrfam": "IPv4", 00:17:59.587 "traddr": "10.0.0.1", 00:17:59.587 "trsvcid": "48726" 00:17:59.587 }, 00:17:59.587 "auth": { 00:17:59.587 "state": "completed", 00:17:59.587 "digest": "sha512", 00:17:59.587 "dhgroup": "ffdhe8192" 00:17:59.587 } 00:17:59.587 } 00:17:59.587 ]' 00:17:59.587 01:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:59.587 01:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:59.587 01:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:59.845 01:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:59.845 01:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:59.845 01:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:17:59.845 01:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.845 01:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.845 01:23:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWQwY2VkM2M4NTdmNDA4NzkwZDA0ZDU4NjAyZDg2YWNiNzk1MmUxYmRmZGE5YWZjtDgDPA==: --dhchap-ctrl-secret DHHC-1:03:NWZjM2M1MDM3MzQ2NGM5NzdiYzE4YWU1ZGUyODI4NGJmZDQ3YjhlODQ2OTVhNGVkNjdkZDdjZGY1YzI0ZDVmOfBcvD4=: 00:18:00.409 01:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.409 01:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:00.409 01:23:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.409 01:23:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.667 01:23:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.667 01:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:00.667 01:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:00.667 01:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:00.667 01:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:18:00.667 01:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:00.667 01:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:00.667 01:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:00.667 01:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:00.667 01:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.667 01:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.667 01:23:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.667 01:23:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.667 01:23:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.667 01:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.667 01:23:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.231 00:18:01.231 01:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:01.231 01:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.231 01:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:01.490 01:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.490 01:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.490 01:23:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.490 01:23:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.490 01:23:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.490 01:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:01.490 { 00:18:01.490 "cntlid": 139, 00:18:01.490 "qid": 0, 00:18:01.490 "state": "enabled", 00:18:01.490 "thread": "nvmf_tgt_poll_group_000", 00:18:01.490 "listen_address": { 00:18:01.490 "trtype": "TCP", 00:18:01.490 "adrfam": "IPv4", 00:18:01.490 "traddr": "10.0.0.2", 00:18:01.490 "trsvcid": "4420" 00:18:01.490 }, 00:18:01.490 "peer_address": { 00:18:01.490 "trtype": "TCP", 00:18:01.490 "adrfam": "IPv4", 00:18:01.490 "traddr": "10.0.0.1", 00:18:01.490 "trsvcid": "48770" 00:18:01.490 }, 00:18:01.490 "auth": { 00:18:01.490 "state": "completed", 00:18:01.490 "digest": "sha512", 00:18:01.490 "dhgroup": "ffdhe8192" 00:18:01.490 } 00:18:01.490 } 00:18:01.490 ]' 00:18:01.490 01:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:01.490 01:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:01.490 01:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:01.490 01:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:01.490 01:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:01.490 01:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.490 01:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.490 01:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.747 01:23:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTMyN2E5MjgwNWNiODEyY2M5MmE3ZWEwMzRmOWU4ZDVIPGp9: --dhchap-ctrl-secret DHHC-1:02:ZGQ4MDM2ZGJiODdlNWRlMmRkN2E2NDI2ODc5OWZhNWU3NzQyZTk3MDEwMWI2OTQzGm2RVA==: 00:18:02.315 01:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
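The --dhchap-secret and --dhchap-ctrl-secret strings in the nvme connect lines above use the NVMe in-band authentication secret representation "DHHC-1:TT:<base64>:", where TT encodes the hash used to transform the stored secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512; note how keys 0 through 3 in this run carry exactly those prefixes) and the base64 payload carries the secret plus a CRC-32 check per the spec. A sketch of producing a compatible secret, assuming an nvme-cli build recent enough to ship the gen-dhchap-key subcommand (flag names as in current nvme-cli; verify against your version):

# Print a random 48-byte secret transformed with SHA-384, i.e. a "DHHC-1:02:...:" string.
# --hmac selects the transformation: 0 none, 1 SHA-256, 2 SHA-384, 3 SHA-512.
key=$(nvme gen-dhchap-key --key-length 48 --hmac 2 \
      --nqn nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562)

# Used the same way as in the log; --dhchap-ctrl-secret is only needed for bidirectional auth.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
    --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 \
    --dhchap-secret "$key"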
00:18:02.315 01:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:02.315 01:23:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.315 01:23:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.315 01:23:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.315 01:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:02.315 01:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:02.315 01:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:02.315 01:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:18:02.315 01:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:02.315 01:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:02.315 01:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:02.315 01:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:02.315 01:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.315 01:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.315 01:23:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.315 01:23:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.315 01:23:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.315 01:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.315 01:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.881 00:18:02.881 01:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:02.881 01:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:02.881 01:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.140 01:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.140 01:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.140 01:23:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:18:03.140 01:23:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.140 01:23:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.140 01:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:03.140 { 00:18:03.140 "cntlid": 141, 00:18:03.140 "qid": 0, 00:18:03.140 "state": "enabled", 00:18:03.140 "thread": "nvmf_tgt_poll_group_000", 00:18:03.140 "listen_address": { 00:18:03.140 "trtype": "TCP", 00:18:03.140 "adrfam": "IPv4", 00:18:03.140 "traddr": "10.0.0.2", 00:18:03.140 "trsvcid": "4420" 00:18:03.140 }, 00:18:03.140 "peer_address": { 00:18:03.140 "trtype": "TCP", 00:18:03.140 "adrfam": "IPv4", 00:18:03.140 "traddr": "10.0.0.1", 00:18:03.140 "trsvcid": "48792" 00:18:03.140 }, 00:18:03.140 "auth": { 00:18:03.140 "state": "completed", 00:18:03.140 "digest": "sha512", 00:18:03.140 "dhgroup": "ffdhe8192" 00:18:03.140 } 00:18:03.140 } 00:18:03.140 ]' 00:18:03.140 01:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:03.140 01:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:03.140 01:23:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:03.140 01:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:03.140 01:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:03.140 01:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.140 01:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.140 01:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.398 01:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODNmZjI3ZjNkYmIwNDM0YTE3NDNhMDgyOTI5MTEwM2VjNWJlMmJiNGE2NzdmNmE2jIXSfw==: --dhchap-ctrl-secret DHHC-1:01:ZGJkNWM5YmFlMGYwNTI5OGY2YzU5YWNkNGM2NGM1OTPZAj1o: 00:18:03.965 01:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.965 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.965 01:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:03.965 01:23:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.965 01:23:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.965 01:23:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.965 01:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:03.965 01:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:03.965 01:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:04.223 01:23:29 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:18:04.223 01:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:04.223 01:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:04.223 01:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:04.223 01:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:04.223 01:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.223 01:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:04.223 01:23:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.223 01:23:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.223 01:23:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.223 01:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:04.223 01:23:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:04.480 00:18:04.737 01:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:04.737 01:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:04.737 01:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.737 01:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.737 01:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.737 01:23:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.737 01:23:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.737 01:23:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.737 01:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:04.737 { 00:18:04.737 "cntlid": 143, 00:18:04.737 "qid": 0, 00:18:04.737 "state": "enabled", 00:18:04.737 "thread": "nvmf_tgt_poll_group_000", 00:18:04.737 "listen_address": { 00:18:04.737 "trtype": "TCP", 00:18:04.737 "adrfam": "IPv4", 00:18:04.737 "traddr": "10.0.0.2", 00:18:04.737 "trsvcid": "4420" 00:18:04.737 }, 00:18:04.737 "peer_address": { 00:18:04.737 "trtype": "TCP", 00:18:04.737 "adrfam": "IPv4", 00:18:04.737 "traddr": "10.0.0.1", 00:18:04.737 "trsvcid": "48824" 00:18:04.737 }, 00:18:04.737 "auth": { 00:18:04.737 "state": "completed", 00:18:04.737 "digest": "sha512", 00:18:04.737 "dhgroup": "ffdhe8192" 00:18:04.737 } 00:18:04.737 } 00:18:04.737 ]' 00:18:04.737 01:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:04.737 01:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 
-- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:04.737 01:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:04.995 01:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:04.995 01:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:04.995 01:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.995 01:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.995 01:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.995 01:23:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTVmYTM0MzExYjMzMTAzZDA3MTUyYmI3NDFhOTFkMTM1NTlhNzNmNjY1ZDJjYmEzYTUwZDRiNTQ4MWIxNjZiYVpMFa0=: 00:18:05.559 01:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.559 01:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:05.559 01:23:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.559 01:23:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.559 01:23:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.559 01:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:05.559 01:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:18:05.559 01:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:05.559 01:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:05.559 01:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:05.559 01:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:05.817 01:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:18:05.817 01:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.817 01:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:05.817 01:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:05.817 01:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:05.817 01:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.817 01:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.817 01:23:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.817 01:23:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.817 01:23:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.817 01:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.817 01:23:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.382 00:18:06.382 01:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:06.382 01:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:06.382 01:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.382 01:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.382 01:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.382 01:23:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.382 01:23:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.382 01:23:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.382 01:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:06.382 { 00:18:06.382 "cntlid": 145, 00:18:06.382 "qid": 0, 00:18:06.382 "state": "enabled", 00:18:06.382 "thread": "nvmf_tgt_poll_group_000", 00:18:06.382 "listen_address": { 00:18:06.382 "trtype": "TCP", 00:18:06.382 "adrfam": "IPv4", 00:18:06.382 "traddr": "10.0.0.2", 00:18:06.382 "trsvcid": "4420" 00:18:06.382 }, 00:18:06.382 "peer_address": { 00:18:06.382 "trtype": "TCP", 00:18:06.382 "adrfam": "IPv4", 00:18:06.382 "traddr": "10.0.0.1", 00:18:06.382 "trsvcid": "48832" 00:18:06.382 }, 00:18:06.382 "auth": { 00:18:06.382 "state": "completed", 00:18:06.382 "digest": "sha512", 00:18:06.382 "dhgroup": "ffdhe8192" 00:18:06.382 } 00:18:06.382 } 00:18:06.382 ]' 00:18:06.382 01:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:06.639 01:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.639 01:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:06.639 01:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:06.639 01:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:06.639 01:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.639 01:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.639 01:23:32 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.896 01:23:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZWQwY2VkM2M4NTdmNDA4NzkwZDA0ZDU4NjAyZDg2YWNiNzk1MmUxYmRmZGE5YWZjtDgDPA==: --dhchap-ctrl-secret DHHC-1:03:NWZjM2M1MDM3MzQ2NGM5NzdiYzE4YWU1ZGUyODI4NGJmZDQ3YjhlODQ2OTVhNGVkNjdkZDdjZGY1YzI0ZDVmOfBcvD4=: 00:18:07.460 01:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.460 01:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:07.460 01:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.460 01:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.460 01:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.460 01:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:18:07.460 01:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.460 01:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.460 01:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.460 01:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:07.460 01:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:07.460 01:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:07.460 01:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:07.460 01:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:07.460 01:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:07.460 01:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:07.460 01:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:07.460 01:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:07.718 request: 00:18:07.718 { 00:18:07.718 "name": "nvme0", 00:18:07.718 "trtype": "tcp", 00:18:07.718 "traddr": "10.0.0.2", 00:18:07.718 "adrfam": "ipv4", 00:18:07.718 "trsvcid": "4420", 00:18:07.718 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:07.718 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:07.718 "prchk_reftag": false, 00:18:07.718 "prchk_guard": false, 00:18:07.718 "hdgst": false, 00:18:07.718 "ddgst": false, 00:18:07.718 "dhchap_key": "key2", 00:18:07.718 "method": "bdev_nvme_attach_controller", 00:18:07.718 "req_id": 1 00:18:07.718 } 00:18:07.718 Got JSON-RPC error response 00:18:07.718 response: 00:18:07.718 { 00:18:07.718 "code": -5, 00:18:07.718 "message": "Input/output error" 00:18:07.718 } 00:18:07.718 01:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:07.718 01:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:07.718 01:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:07.718 01:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:07.718 01:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:07.718 01:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.718 01:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.718 01:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.718 01:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.718 01:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.718 01:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.718 01:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.718 01:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:07.718 01:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:07.718 01:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:07.718 01:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:07.718 01:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:07.718 01:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:07.718 01:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:07.718 01:23:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:07.718 01:23:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:08.283 request: 00:18:08.283 { 00:18:08.283 "name": "nvme0", 00:18:08.283 "trtype": "tcp", 00:18:08.283 "traddr": "10.0.0.2", 00:18:08.283 "adrfam": "ipv4", 00:18:08.283 "trsvcid": "4420", 00:18:08.283 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:08.283 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:08.284 "prchk_reftag": false, 00:18:08.284 "prchk_guard": false, 00:18:08.284 "hdgst": false, 00:18:08.284 "ddgst": false, 00:18:08.284 "dhchap_key": "key1", 00:18:08.284 "dhchap_ctrlr_key": "ckey2", 00:18:08.284 "method": "bdev_nvme_attach_controller", 00:18:08.284 "req_id": 1 00:18:08.284 } 00:18:08.284 Got JSON-RPC error response 00:18:08.284 response: 00:18:08.284 { 00:18:08.284 "code": -5, 00:18:08.284 "message": "Input/output error" 00:18:08.284 } 00:18:08.284 01:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:08.284 01:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:08.284 01:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:08.284 01:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:08.284 01:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:08.284 01:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.284 01:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.284 01:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.284 01:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:18:08.284 01:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.284 01:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.284 01:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.284 01:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.284 01:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:08.284 01:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.284 01:23:34 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:08.284 01:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:08.284 01:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:08.284 01:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:08.284 01:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.284 01:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.542 request: 00:18:08.542 { 00:18:08.542 "name": "nvme0", 00:18:08.542 "trtype": "tcp", 00:18:08.542 "traddr": "10.0.0.2", 00:18:08.542 "adrfam": "ipv4", 00:18:08.542 "trsvcid": "4420", 00:18:08.542 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:08.542 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:08.542 "prchk_reftag": false, 00:18:08.542 "prchk_guard": false, 00:18:08.542 "hdgst": false, 00:18:08.542 "ddgst": false, 00:18:08.542 "dhchap_key": "key1", 00:18:08.542 "dhchap_ctrlr_key": "ckey1", 00:18:08.542 "method": "bdev_nvme_attach_controller", 00:18:08.542 "req_id": 1 00:18:08.542 } 00:18:08.542 Got JSON-RPC error response 00:18:08.542 response: 00:18:08.542 { 00:18:08.542 "code": -5, 00:18:08.542 "message": "Input/output error" 00:18:08.542 } 00:18:08.542 01:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:08.542 01:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:08.542 01:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:08.542 01:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:08.542 01:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:08.542 01:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.542 01:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.542 01:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.542 01:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 3381771 00:18:08.542 01:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3381771 ']' 00:18:08.542 01:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3381771 00:18:08.542 01:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:08.542 01:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:08.801 01:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3381771 00:18:08.801 01:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:08.801 01:23:34 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:08.801 01:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3381771' 00:18:08.801 killing process with pid 3381771 00:18:08.801 01:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3381771 00:18:08.801 01:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3381771 00:18:08.801 01:23:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:08.801 01:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:08.801 01:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:08.801 01:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.801 01:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3401933 00:18:08.801 01:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3401933 00:18:08.801 01:23:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:08.801 01:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3401933 ']' 00:18:08.801 01:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.801 01:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:08.801 01:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.801 01:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:08.801 01:23:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.733 01:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:09.733 01:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:09.733 01:23:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:09.733 01:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:09.733 01:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.733 01:23:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:09.733 01:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:09.733 01:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 3401933 00:18:09.733 01:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3401933 ']' 00:18:09.733 01:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.733 01:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:09.733 01:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
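
The killprocess sequence traced just above (pid 3381771) follows the stock autotest pattern: confirm a pid was supplied and is still alive, look up its command name, then kill and reap it. A rough reconstruction from the xtrace — the sudo branch and retry details visible above are elided here:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                          # still running?
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid") # e.g. reactor_0
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                         # reap the child
    }
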
00:18:09.733 01:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:09.733 01:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.990 01:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:09.990 01:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:09.990 01:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:18:09.990 01:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.990 01:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.990 01:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.990 01:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:18:09.990 01:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:09.990 01:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:09.990 01:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:09.990 01:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:09.990 01:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.990 01:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:09.990 01:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.990 01:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.990 01:23:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.990 01:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:09.990 01:23:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:10.555 00:18:10.555 01:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:10.555 01:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:10.555 01:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.814 01:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.814 01:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.814 01:23:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.814 01:23:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.814 01:23:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.814 01:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:10.814 { 00:18:10.814 
"cntlid": 1, 00:18:10.814 "qid": 0, 00:18:10.814 "state": "enabled", 00:18:10.814 "thread": "nvmf_tgt_poll_group_000", 00:18:10.814 "listen_address": { 00:18:10.814 "trtype": "TCP", 00:18:10.814 "adrfam": "IPv4", 00:18:10.814 "traddr": "10.0.0.2", 00:18:10.814 "trsvcid": "4420" 00:18:10.814 }, 00:18:10.814 "peer_address": { 00:18:10.814 "trtype": "TCP", 00:18:10.814 "adrfam": "IPv4", 00:18:10.814 "traddr": "10.0.0.1", 00:18:10.814 "trsvcid": "40138" 00:18:10.814 }, 00:18:10.814 "auth": { 00:18:10.814 "state": "completed", 00:18:10.814 "digest": "sha512", 00:18:10.814 "dhgroup": "ffdhe8192" 00:18:10.814 } 00:18:10.814 } 00:18:10.814 ]' 00:18:10.814 01:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:10.814 01:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:10.814 01:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:10.814 01:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:10.814 01:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:10.814 01:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.814 01:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.814 01:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.072 01:23:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NTVmYTM0MzExYjMzMTAzZDA3MTUyYmI3NDFhOTFkMTM1NTlhNzNmNjY1ZDJjYmEzYTUwZDRiNTQ4MWIxNjZiYVpMFa0=: 00:18:11.638 01:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.638 01:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:11.638 01:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.638 01:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.638 01:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.638 01:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:11.638 01:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.638 01:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.638 01:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.638 01:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:11.638 01:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:11.897 01:23:37 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:11.897 01:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:11.897 01:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:11.897 01:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:11.897 01:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:11.897 01:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:11.897 01:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:11.897 01:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:11.897 01:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:11.897 request: 00:18:11.897 { 00:18:11.897 "name": "nvme0", 00:18:11.897 "trtype": "tcp", 00:18:11.897 "traddr": "10.0.0.2", 00:18:11.897 "adrfam": "ipv4", 00:18:11.897 "trsvcid": "4420", 00:18:11.897 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:11.897 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:11.897 "prchk_reftag": false, 00:18:11.897 "prchk_guard": false, 00:18:11.897 "hdgst": false, 00:18:11.897 "ddgst": false, 00:18:11.897 "dhchap_key": "key3", 00:18:11.897 "method": "bdev_nvme_attach_controller", 00:18:11.897 "req_id": 1 00:18:11.897 } 00:18:11.897 Got JSON-RPC error response 00:18:11.897 response: 00:18:11.897 { 00:18:11.897 "code": -5, 00:18:11.897 "message": "Input/output error" 00:18:11.897 } 00:18:11.897 01:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:11.897 01:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:11.897 01:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:11.897 01:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:11.897 01:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:18:11.897 01:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:18:11.897 01:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:11.897 01:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:12.156 01:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:12.156 01:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:12.156 01:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:12.156 01:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:12.156 01:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:12.156 01:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:12.156 01:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:12.156 01:23:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:12.156 01:23:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:12.414 request: 00:18:12.414 { 00:18:12.414 "name": "nvme0", 00:18:12.414 "trtype": "tcp", 00:18:12.414 "traddr": "10.0.0.2", 00:18:12.414 "adrfam": "ipv4", 00:18:12.414 "trsvcid": "4420", 00:18:12.414 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:12.414 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:12.414 "prchk_reftag": false, 00:18:12.414 "prchk_guard": false, 00:18:12.414 "hdgst": false, 00:18:12.414 "ddgst": false, 00:18:12.414 "dhchap_key": "key3", 00:18:12.414 "method": "bdev_nvme_attach_controller", 00:18:12.414 "req_id": 1 00:18:12.414 } 00:18:12.414 Got JSON-RPC error response 00:18:12.414 response: 00:18:12.414 { 00:18:12.414 "code": -5, 00:18:12.414 "message": "Input/output error" 00:18:12.414 } 00:18:12.414 01:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:12.414 01:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:12.414 01:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:12.414 01:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:12.414 01:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:12.414 01:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:18:12.414 01:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:12.414 01:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:12.414 01:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:12.415 01:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:12.415 01:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:12.415 01:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.415 01:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.415 01:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.415 01:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:12.415 01:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.415 01:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.415 01:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.415 01:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:12.415 01:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:12.415 01:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:12.415 01:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:12.415 01:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:12.415 01:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:12.415 01:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:12.415 01:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:12.415 01:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:12.673 request: 00:18:12.673 { 00:18:12.673 "name": "nvme0", 00:18:12.673 "trtype": "tcp", 00:18:12.673 "traddr": "10.0.0.2", 00:18:12.673 "adrfam": "ipv4", 00:18:12.673 "trsvcid": "4420", 00:18:12.673 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:12.673 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:12.673 "prchk_reftag": false, 00:18:12.673 "prchk_guard": false, 00:18:12.673 "hdgst": false, 00:18:12.673 "ddgst": false, 00:18:12.673 
"dhchap_key": "key0", 00:18:12.673 "dhchap_ctrlr_key": "key1", 00:18:12.673 "method": "bdev_nvme_attach_controller", 00:18:12.673 "req_id": 1 00:18:12.673 } 00:18:12.673 Got JSON-RPC error response 00:18:12.673 response: 00:18:12.673 { 00:18:12.673 "code": -5, 00:18:12.673 "message": "Input/output error" 00:18:12.673 } 00:18:12.673 01:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:12.673 01:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:12.673 01:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:12.673 01:23:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:12.673 01:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:12.674 01:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:12.932 00:18:12.932 01:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:18:12.932 01:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:18:12.932 01:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.190 01:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.190 01:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.190 01:23:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.190 01:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:18:13.190 01:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:18:13.190 01:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3382016 00:18:13.190 01:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3382016 ']' 00:18:13.190 01:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3382016 00:18:13.190 01:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:13.190 01:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:13.190 01:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3382016 00:18:13.449 01:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:13.449 01:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:13.449 01:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3382016' 00:18:13.449 killing process with pid 3382016 00:18:13.449 01:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3382016 00:18:13.449 01:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3382016 
00:18:13.706 01:23:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:13.706 01:23:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:13.706 01:23:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:18:13.706 01:23:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:13.706 01:23:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:18:13.706 01:23:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:13.706 01:23:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:13.706 rmmod nvme_tcp 00:18:13.706 rmmod nvme_fabrics 00:18:13.706 rmmod nvme_keyring 00:18:13.706 01:23:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:13.706 01:23:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:18:13.706 01:23:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:18:13.706 01:23:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 3401933 ']' 00:18:13.706 01:23:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 3401933 00:18:13.706 01:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3401933 ']' 00:18:13.706 01:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3401933 00:18:13.706 01:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:13.706 01:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:13.706 01:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3401933 00:18:13.706 01:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:13.706 01:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:13.706 01:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3401933' 00:18:13.706 killing process with pid 3401933 00:18:13.706 01:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3401933 00:18:13.707 01:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3401933 00:18:13.964 01:23:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:13.964 01:23:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:13.964 01:23:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:13.964 01:23:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:13.964 01:23:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:13.964 01:23:39 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.964 01:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:13.964 01:23:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.867 01:23:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:15.867 01:23:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.I3t /tmp/spdk.key-sha256.mOy /tmp/spdk.key-sha384.S1F /tmp/spdk.key-sha512.Atf /tmp/spdk.key-sha512.7BO /tmp/spdk.key-sha384.aO3 /tmp/spdk.key-sha256.2x2 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:16.127 00:18:16.127 real 2m7.477s 00:18:16.127 user 4m53.612s 00:18:16.127 sys 0m19.863s 00:18:16.127 01:23:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:16.127 01:23:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.127 ************************************ 00:18:16.127 END TEST nvmf_auth_target 00:18:16.127 ************************************ 00:18:16.127 01:23:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:16.127 01:23:41 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:18:16.127 01:23:41 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:16.127 01:23:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:16.127 01:23:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:16.127 01:23:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:16.127 ************************************ 00:18:16.127 START TEST nvmf_bdevio_no_huge 00:18:16.127 ************************************ 00:18:16.127 01:23:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:16.127 * Looking for test storage... 00:18:16.127 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
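
bdevio.sh re-sources nvmf/common.sh, so the host identity used throughout this job is regenerated the same way each time; the @17..@19 lines above amount to the following (the ##*: strip is an assumption, but it matches the logged values, where NVME_HOSTID is exactly the uuid suffix of the NQN):

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}       # assumed: keep the uuid after the last ':'
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
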
00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:16.127 01:23:42 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:18:16.127 01:23:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
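
The vendor/device tables being filled above (e810, x722, mlx) drive the scan that follows: each matching PCI function is resolved to its kernel net device through sysfs, which is where the "Found net devices under 0000:86:00.x" lines below come from. A condensed stand-in for that lookup — the script itself walks a cached PCI list, so lspci here is only illustrative:

    # Intel E810 is 8086:159b, per the e810 array above
    for pci in $(lspci -Dmn -d 8086:159b | awk '{print $1}'); do
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            echo "Found net devices under $pci: ${netdir##*/}"
        done
    done
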
00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:21.432 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:21.432 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:21.432 
01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:21.432 Found net devices under 0000:86:00.0: cvl_0_0 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:21.432 Found net devices under 0000:86:00.1: cvl_0_1 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:21.432 01:23:47 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:21.432 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:21.692 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:21.692 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:21.692 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:21.692 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:21.692 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:21.692 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:21.692 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:21.692 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:21.692 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:18:21.692 00:18:21.692 --- 10.0.0.2 ping statistics --- 00:18:21.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:21.692 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:18:21.692 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:21.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:21.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:18:21.692 00:18:21.692 --- 10.0.0.1 ping statistics --- 00:18:21.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:21.692 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:18:21.692 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:21.692 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:18:21.692 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:21.692 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:21.692 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:21.692 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:21.692 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:21.692 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:21.692 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:21.692 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:21.692 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:21.692 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:21.692 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:21.692 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=3406197 00:18:21.692 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:21.692 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 3406197 00:18:21.692 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 3406197 ']' 00:18:21.692 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:21.692 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:21.692 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:21.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:21.692 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:21.692 01:23:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:21.951 [2024-07-16 01:23:47.716446] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:18:21.951 [2024-07-16 01:23:47.716491] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:21.951 [2024-07-16 01:23:47.778464] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:21.951 [2024-07-16 01:23:47.860063] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:21.951 [2024-07-16 01:23:47.860098] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:21.951 [2024-07-16 01:23:47.860105] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:21.951 [2024-07-16 01:23:47.860111] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:21.951 [2024-07-16 01:23:47.860116] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
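
Because the suite runs with --no-hugepages, the target comes up with DPDK in no-huge mode and a 1024 MB memory cap, inside the target-side network namespace. Every flag below is taken from the command line logged above; waitforlisten then polls the RPC socket until the app answers:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    nvmfpid=$!
    waitforlisten "$nvmfpid"    # blocks until /var/tmp/spdk.sock is live
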
00:18:21.951 [2024-07-16 01:23:47.860224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:21.951 [2024-07-16 01:23:47.860365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:21.951 [2024-07-16 01:23:47.860471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:21.951 [2024-07-16 01:23:47.860472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:22.886 01:23:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:22.886 01:23:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:18:22.886 01:23:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:22.886 01:23:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:22.886 01:23:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:22.886 01:23:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:22.886 01:23:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:22.886 01:23:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.886 01:23:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:22.886 [2024-07-16 01:23:48.559191] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:22.886 01:23:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.886 01:23:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:22.886 01:23:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.886 01:23:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:22.886 Malloc0 00:18:22.886 01:23:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.886 01:23:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:22.886 01:23:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.886 01:23:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:22.886 01:23:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.886 01:23:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:22.886 01:23:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.886 01:23:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:22.886 01:23:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.886 01:23:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:22.886 01:23:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.886 01:23:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:22.886 [2024-07-16 01:23:48.599460] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:22.886 01:23:48 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.886 01:23:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:22.886 01:23:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:22.886 01:23:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:18:22.886 01:23:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:18:22.886 01:23:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:22.886 01:23:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:22.886 { 00:18:22.886 "params": { 00:18:22.886 "name": "Nvme$subsystem", 00:18:22.886 "trtype": "$TEST_TRANSPORT", 00:18:22.886 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:22.886 "adrfam": "ipv4", 00:18:22.886 "trsvcid": "$NVMF_PORT", 00:18:22.886 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:22.886 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:22.886 "hdgst": ${hdgst:-false}, 00:18:22.886 "ddgst": ${ddgst:-false} 00:18:22.886 }, 00:18:22.886 "method": "bdev_nvme_attach_controller" 00:18:22.886 } 00:18:22.886 EOF 00:18:22.886 )") 00:18:22.886 01:23:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:18:22.886 01:23:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:18:22.886 01:23:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:18:22.886 01:23:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:22.886 "params": { 00:18:22.886 "name": "Nvme1", 00:18:22.886 "trtype": "tcp", 00:18:22.886 "traddr": "10.0.0.2", 00:18:22.886 "adrfam": "ipv4", 00:18:22.886 "trsvcid": "4420", 00:18:22.886 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:22.886 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:22.886 "hdgst": false, 00:18:22.886 "ddgst": false 00:18:22.886 }, 00:18:22.887 "method": "bdev_nvme_attach_controller" 00:18:22.887 }' 00:18:22.887 [2024-07-16 01:23:48.647452] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
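
gen_nvmf_target_json expands its heredoc into the single controller entry printed above and hands bdevio the assembled configuration on fd 62. A hand-rolled equivalent of that invocation follows; only the params/method object is shown verbatim in the log, so the surrounding "subsystems"/"bdev" wrapper is an assumption about the full JSON-config layout:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio \
        --no-huge -s 1024 --json /dev/stdin <<'EOF'
    {
      "subsystems": [ {
        "subsystem": "bdev",
        "config": [ {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false, "ddgst": false
          }
        } ]
      } ]
    }
    EOF
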
00:18:22.887 [2024-07-16 01:23:48.647499] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3406442 ] 00:18:22.887 [2024-07-16 01:23:48.705821] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:22.887 [2024-07-16 01:23:48.789956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:22.887 [2024-07-16 01:23:48.790054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.887 [2024-07-16 01:23:48.790054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:23.145 I/O targets: 00:18:23.145 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:23.145 00:18:23.145 00:18:23.145 CUnit - A unit testing framework for C - Version 2.1-3 00:18:23.145 http://cunit.sourceforge.net/ 00:18:23.145 00:18:23.145 00:18:23.145 Suite: bdevio tests on: Nvme1n1 00:18:23.404 Test: blockdev write read block ...passed 00:18:23.404 Test: blockdev write zeroes read block ...passed 00:18:23.404 Test: blockdev write zeroes read no split ...passed 00:18:23.404 Test: blockdev write zeroes read split ...passed 00:18:23.404 Test: blockdev write zeroes read split partial ...passed 00:18:23.404 Test: blockdev reset ...[2024-07-16 01:23:49.252680] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:23.404 [2024-07-16 01:23:49.252743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa684f0 (9): Bad file descriptor 00:18:23.404 [2024-07-16 01:23:49.311948] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:23.404 passed 00:18:23.404 Test: blockdev write read 8 blocks ...passed 00:18:23.404 Test: blockdev write read size > 128k ...passed 00:18:23.404 Test: blockdev write read invalid size ...passed 00:18:23.663 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:23.663 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:23.663 Test: blockdev write read max offset ...passed 00:18:23.663 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:23.663 Test: blockdev writev readv 8 blocks ...passed 00:18:23.663 Test: blockdev writev readv 30 x 1block ...passed 00:18:23.663 Test: blockdev writev readv block ...passed 00:18:23.663 Test: blockdev writev readv size > 128k ...passed 00:18:23.663 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:23.663 Test: blockdev comparev and writev ...[2024-07-16 01:23:49.520980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:23.663 [2024-07-16 01:23:49.521008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:23.663 [2024-07-16 01:23:49.521021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:23.663 [2024-07-16 01:23:49.521029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:23.663 [2024-07-16 01:23:49.521268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:23.663 [2024-07-16 01:23:49.521277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:23.663 [2024-07-16 01:23:49.521288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:23.663 [2024-07-16 01:23:49.521295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:23.663 [2024-07-16 01:23:49.521533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:23.663 [2024-07-16 01:23:49.521542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:23.663 [2024-07-16 01:23:49.521553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:23.663 [2024-07-16 01:23:49.521560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:23.663 [2024-07-16 01:23:49.521787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:23.663 [2024-07-16 01:23:49.521795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:23.663 [2024-07-16 01:23:49.521806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:23.663 [2024-07-16 01:23:49.521812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:23.663 passed 00:18:23.663 Test: blockdev nvme passthru rw ...passed 00:18:23.663 Test: blockdev nvme passthru vendor specific ...[2024-07-16 01:23:49.603749] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:23.663 [2024-07-16 01:23:49.603767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:23.663 [2024-07-16 01:23:49.603871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:23.663 [2024-07-16 01:23:49.603880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:23.663 [2024-07-16 01:23:49.603983] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:23.663 [2024-07-16 01:23:49.603992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:23.663 [2024-07-16 01:23:49.604090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:23.663 [2024-07-16 01:23:49.604099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:23.663 passed 00:18:23.663 Test: blockdev nvme admin passthru ...passed 00:18:23.922 Test: blockdev copy ...passed 00:18:23.922 00:18:23.922 Run Summary: Type Total Ran Passed Failed Inactive 00:18:23.922 suites 1 1 n/a 0 0 00:18:23.922 tests 23 23 23 0 0 00:18:23.922 asserts 152 152 152 0 n/a 00:18:23.922 00:18:23.922 Elapsed time = 1.142 seconds 00:18:24.180 01:23:49 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:24.180 01:23:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.180 01:23:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:24.180 01:23:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.180 01:23:49 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:24.180 01:23:49 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:24.180 01:23:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:24.180 01:23:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:18:24.180 01:23:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:24.180 01:23:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:18:24.180 01:23:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:24.180 01:23:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:24.180 rmmod nvme_tcp 00:18:24.180 rmmod nvme_fabrics 00:18:24.180 rmmod nvme_keyring 00:18:24.180 01:23:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:24.180 01:23:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:18:24.180 01:23:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:18:24.180 01:23:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 3406197 ']' 00:18:24.180 01:23:50 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 3406197 00:18:24.180 01:23:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 3406197 ']' 00:18:24.180 01:23:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 3406197 00:18:24.180 01:23:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:18:24.180 01:23:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:24.180 01:23:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3406197 00:18:24.180 01:23:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:18:24.180 01:23:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:18:24.180 01:23:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3406197' 00:18:24.180 killing process with pid 3406197 00:18:24.180 01:23:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 3406197 00:18:24.180 01:23:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 3406197 00:18:24.438 01:23:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:24.438 01:23:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:24.438 01:23:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:24.438 01:23:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:24.438 01:23:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:24.438 01:23:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.438 01:23:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:24.438 01:23:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.975 01:23:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:26.975 00:18:26.975 real 0m10.510s 00:18:26.975 user 0m13.798s 00:18:26.975 sys 0m5.106s 00:18:26.975 01:23:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:26.975 01:23:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:26.975 ************************************ 00:18:26.975 END TEST nvmf_bdevio_no_huge 00:18:26.975 ************************************ 00:18:26.975 01:23:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:26.975 01:23:52 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:26.975 01:23:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:26.975 01:23:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:26.975 01:23:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:26.975 ************************************ 00:18:26.975 START TEST nvmf_tls 00:18:26.975 ************************************ 00:18:26.975 01:23:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:26.975 * Looking for test storage... 
00:18:26.975 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:26.975 01:23:52 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:26.975 01:23:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:26.975 01:23:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:26.975 01:23:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:26.975 01:23:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:26.975 01:23:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:26.975 01:23:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:26.975 01:23:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:26.975 01:23:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:26.975 01:23:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:26.975 01:23:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:26.975 01:23:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:26.975 01:23:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:26.975 01:23:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:18:26.975 01:23:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:26.975 01:23:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:26.975 01:23:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:26.975 01:23:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:26.975 01:23:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:26.975 01:23:52 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:26.975 01:23:52 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:26.975 01:23:52 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:26.975 01:23:52 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.975 01:23:52 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.975 01:23:52 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.975 01:23:52 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:26.975 01:23:52 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.975 01:23:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:18:26.975 01:23:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:26.975 01:23:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:26.975 01:23:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:26.975 01:23:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:26.975 01:23:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:26.975 01:23:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:26.975 01:23:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:26.975 01:23:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:26.975 01:23:52 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:26.975 01:23:52 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:18:26.975 01:23:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:26.975 01:23:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:26.975 01:23:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:26.975 01:23:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:26.975 01:23:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:26.975 01:23:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:26.975 01:23:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:26.976 01:23:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.976 01:23:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:26.976 01:23:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:26.976 01:23:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:18:26.976 01:23:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:18:32.247 
01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:32.247 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:32.247 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:32.247 Found net devices under 0000:86:00.0: cvl_0_0 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:32.247 Found net devices under 0000:86:00.1: cvl_0_1 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:32.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:32.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:18:32.247 00:18:32.247 --- 10.0.0.2 ping statistics --- 00:18:32.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.247 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:32.247 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:32.247 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:18:32.247 00:18:32.247 --- 10.0.0.1 ping statistics --- 00:18:32.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.247 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:32.247 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:32.248 01:23:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:32.248 01:23:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.248 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3409991 00:18:32.248 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3409991 00:18:32.248 01:23:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:32.248 01:23:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3409991 ']' 00:18:32.248 01:23:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.248 01:23:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:32.248 01:23:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.248 01:23:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:32.248 01:23:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.248 [2024-07-16 01:23:57.959565] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:18:32.248 [2024-07-16 01:23:57.959609] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:32.248 EAL: No free 2048 kB hugepages reported on node 1 00:18:32.248 [2024-07-16 01:23:58.022450] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.248 [2024-07-16 01:23:58.097830] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:32.248 [2024-07-16 01:23:58.097867] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:32.248 [2024-07-16 01:23:58.097874] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:32.248 [2024-07-16 01:23:58.097879] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:32.248 [2024-07-16 01:23:58.097884] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:32.248 [2024-07-16 01:23:58.097921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:32.816 01:23:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:32.816 01:23:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:32.816 01:23:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:32.816 01:23:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:32.816 01:23:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.816 01:23:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:32.816 01:23:58 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:18:32.816 01:23:58 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:33.075 true 00:18:33.075 01:23:58 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:33.075 01:23:58 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:18:33.333 01:23:59 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:18:33.333 01:23:59 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:18:33.333 01:23:59 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:33.333 01:23:59 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:33.333 01:23:59 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:18:33.592 01:23:59 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:18:33.592 01:23:59 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:18:33.592 01:23:59 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:33.852 01:23:59 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:33.852 01:23:59 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:18:33.852 01:23:59 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:18:33.852 01:23:59 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:18:33.852 01:23:59 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:33.852 01:23:59 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:18:34.111 01:23:59 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:18:34.111 01:23:59 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:18:34.111 01:23:59 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:34.369 01:24:00 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:34.369 01:24:00 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:18:34.369 01:24:00 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:18:34.369 01:24:00 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:18:34.369 01:24:00 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:34.627 01:24:00 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:34.627 01:24:00 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:18:34.886 01:24:00 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:18:34.886 01:24:00 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:18:34.886 01:24:00 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:34.886 01:24:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:34.886 01:24:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:34.886 01:24:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:34.886 01:24:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:18:34.886 01:24:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:34.886 01:24:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:34.886 01:24:00 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:34.886 01:24:00 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:34.886 01:24:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:34.886 01:24:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:34.886 01:24:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:34.886 01:24:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:18:34.886 01:24:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:34.886 01:24:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:34.886 01:24:00 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:34.886 01:24:00 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:18:34.886 01:24:00 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.geN8uW3FNw 00:18:34.886 01:24:00 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:34.886 01:24:00 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.KJrG5x8T7c 00:18:34.886 01:24:00 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:34.886 01:24:00 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:34.887 01:24:00 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.geN8uW3FNw 00:18:34.887 01:24:00 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.KJrG5x8T7c 00:18:34.887 01:24:00 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:18:35.146 01:24:00 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:35.403 01:24:01 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.geN8uW3FNw 00:18:35.403 01:24:01 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.geN8uW3FNw 00:18:35.403 01:24:01 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:35.403 [2024-07-16 01:24:01.337702] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:35.403 01:24:01 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:35.661 01:24:01 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:35.927 [2024-07-16 01:24:01.662524] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:35.927 [2024-07-16 01:24:01.662715] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:35.927 01:24:01 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:35.927 malloc0 00:18:35.927 01:24:01 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:36.190 01:24:02 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.geN8uW3FNw 00:18:36.448 [2024-07-16 01:24:02.192064] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:36.448 01:24:02 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.geN8uW3FNw 00:18:36.448 EAL: No free 2048 kB hugepages reported on node 1 00:18:46.424 Initializing NVMe Controllers 00:18:46.424 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:46.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:46.424 Initialization complete. Launching workers. 
00:18:46.424 ======================================================== 00:18:46.424 Latency(us) 00:18:46.424 Device Information : IOPS MiB/s Average min max 00:18:46.424 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17000.46 66.41 3763.41 2688.42 8429.45 00:18:46.424 ======================================================== 00:18:46.424 Total : 17000.46 66.41 3763.41 2688.42 8429.45 00:18:46.424 00:18:46.424 01:24:12 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.geN8uW3FNw 00:18:46.424 01:24:12 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:46.424 01:24:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:46.424 01:24:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:46.424 01:24:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.geN8uW3FNw' 00:18:46.424 01:24:12 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:46.424 01:24:12 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3413013 00:18:46.424 01:24:12 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:46.424 01:24:12 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3413013 /var/tmp/bdevperf.sock 00:18:46.424 01:24:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3413013 ']' 00:18:46.424 01:24:12 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:46.424 01:24:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:46.424 01:24:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:46.424 01:24:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:46.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:46.424 01:24:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:46.424 01:24:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:46.424 [2024-07-16 01:24:12.330122] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
00:18:46.424 [2024-07-16 01:24:12.330172] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3413013 ] 00:18:46.424 EAL: No free 2048 kB hugepages reported on node 1 00:18:46.424 [2024-07-16 01:24:12.379680] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.683 [2024-07-16 01:24:12.457164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:47.250 01:24:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:47.250 01:24:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:47.250 01:24:13 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.geN8uW3FNw 00:18:47.508 [2024-07-16 01:24:13.291167] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:47.508 [2024-07-16 01:24:13.291233] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:47.508 TLSTESTn1 00:18:47.508 01:24:13 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:47.508 Running I/O for 10 seconds... 00:18:59.712 00:18:59.712 Latency(us) 00:18:59.712 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:59.712 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:59.712 Verification LBA range: start 0x0 length 0x2000 00:18:59.712 TLSTESTn1 : 10.02 5624.19 21.97 0.00 0.00 22722.93 4805.97 29085.50 00:18:59.712 =================================================================================================================== 00:18:59.712 Total : 5624.19 21.97 0.00 0.00 22722.93 4805.97 29085.50 00:18:59.712 0 00:18:59.712 01:24:23 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:59.712 01:24:23 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3413013 00:18:59.712 01:24:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3413013 ']' 00:18:59.712 01:24:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3413013 00:18:59.712 01:24:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:59.712 01:24:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:59.712 01:24:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3413013 00:18:59.712 01:24:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:59.712 01:24:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:59.712 01:24:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3413013' 00:18:59.712 killing process with pid 3413013 00:18:59.712 01:24:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3413013 00:18:59.712 Received shutdown signal, test time was about 10.000000 seconds 00:18:59.712 00:18:59.712 Latency(us) 00:18:59.712 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:18:59.712 =================================================================================================================== 00:18:59.712 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:59.712 [2024-07-16 01:24:23.574940] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:59.712 01:24:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3413013 00:18:59.712 01:24:23 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KJrG5x8T7c 00:18:59.712 01:24:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:59.713 01:24:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KJrG5x8T7c 00:18:59.713 01:24:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:59.713 01:24:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:59.713 01:24:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:59.713 01:24:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:59.713 01:24:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KJrG5x8T7c 00:18:59.713 01:24:23 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:59.713 01:24:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:59.713 01:24:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:59.713 01:24:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.KJrG5x8T7c' 00:18:59.713 01:24:23 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:59.713 01:24:23 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3414906 00:18:59.713 01:24:23 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:59.713 01:24:23 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:59.713 01:24:23 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3414906 /var/tmp/bdevperf.sock 00:18:59.713 01:24:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3414906 ']' 00:18:59.713 01:24:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:59.713 01:24:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:59.713 01:24:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:59.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:59.713 01:24:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:59.713 01:24:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:59.713 [2024-07-16 01:24:23.803729] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
00:18:59.713 [2024-07-16 01:24:23.803778] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3414906 ] 00:18:59.713 EAL: No free 2048 kB hugepages reported on node 1 00:18:59.713 [2024-07-16 01:24:23.852649] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.713 [2024-07-16 01:24:23.929692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:59.713 01:24:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:59.713 01:24:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:59.713 01:24:24 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.KJrG5x8T7c 00:18:59.713 [2024-07-16 01:24:24.750625] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:59.713 [2024-07-16 01:24:24.750695] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:59.713 [2024-07-16 01:24:24.759810] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:59.713 [2024-07-16 01:24:24.760006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x250e680 (107): Transport endpoint is not connected 00:18:59.713 [2024-07-16 01:24:24.760999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x250e680 (9): Bad file descriptor 00:18:59.713 [2024-07-16 01:24:24.762001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:59.713 [2024-07-16 01:24:24.762010] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:59.713 [2024-07-16 01:24:24.762017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:59.713 request: 00:18:59.713 { 00:18:59.713 "name": "TLSTEST", 00:18:59.713 "trtype": "tcp", 00:18:59.713 "traddr": "10.0.0.2", 00:18:59.713 "adrfam": "ipv4", 00:18:59.713 "trsvcid": "4420", 00:18:59.713 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:59.713 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:59.713 "prchk_reftag": false, 00:18:59.713 "prchk_guard": false, 00:18:59.713 "hdgst": false, 00:18:59.713 "ddgst": false, 00:18:59.713 "psk": "/tmp/tmp.KJrG5x8T7c", 00:18:59.713 "method": "bdev_nvme_attach_controller", 00:18:59.713 "req_id": 1 00:18:59.713 } 00:18:59.713 Got JSON-RPC error response 00:18:59.713 response: 00:18:59.713 { 00:18:59.713 "code": -5, 00:18:59.713 "message": "Input/output error" 00:18:59.713 } 00:18:59.713 01:24:24 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3414906 00:18:59.713 01:24:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3414906 ']' 00:18:59.713 01:24:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3414906 00:18:59.713 01:24:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:59.713 01:24:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:59.713 01:24:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3414906 00:18:59.713 01:24:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:59.713 01:24:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:59.713 01:24:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3414906' 00:18:59.713 killing process with pid 3414906 00:18:59.713 01:24:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3414906 00:18:59.713 Received shutdown signal, test time was about 10.000000 seconds 00:18:59.713 00:18:59.713 Latency(us) 00:18:59.713 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:59.713 =================================================================================================================== 00:18:59.713 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:59.713 [2024-07-16 01:24:24.832740] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:59.713 01:24:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3414906 00:18:59.713 01:24:25 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:59.713 01:24:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:59.713 01:24:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:59.713 01:24:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:59.713 01:24:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:59.713 01:24:25 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.geN8uW3FNw 00:18:59.713 01:24:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:59.713 01:24:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.geN8uW3FNw 00:18:59.713 01:24:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:59.713 01:24:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:59.713 01:24:25 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:59.713 01:24:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:59.713 01:24:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.geN8uW3FNw 00:18:59.713 01:24:25 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:59.713 01:24:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:59.713 01:24:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:59.713 01:24:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.geN8uW3FNw' 00:18:59.713 01:24:25 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:59.713 01:24:25 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3415076 00:18:59.713 01:24:25 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:59.713 01:24:25 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:59.713 01:24:25 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3415076 /var/tmp/bdevperf.sock 00:18:59.713 01:24:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3415076 ']' 00:18:59.713 01:24:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:59.713 01:24:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:59.713 01:24:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:59.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:59.713 01:24:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:59.713 01:24:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:59.713 [2024-07-16 01:24:25.054935] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
00:18:59.713 [2024-07-16 01:24:25.054984] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3415076 ] 00:18:59.713 EAL: No free 2048 kB hugepages reported on node 1 00:18:59.713 [2024-07-16 01:24:25.104544] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.713 [2024-07-16 01:24:25.182506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:59.971 01:24:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:59.971 01:24:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:59.971 01:24:25 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.geN8uW3FNw 00:19:00.231 [2024-07-16 01:24:26.004433] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:00.231 [2024-07-16 01:24:26.004499] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:00.231 [2024-07-16 01:24:26.012889] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:00.231 [2024-07-16 01:24:26.012910] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:00.231 [2024-07-16 01:24:26.012934] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:00.231 [2024-07-16 01:24:26.013881] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x149b680 (107): Transport endpoint is not connected 00:19:00.231 [2024-07-16 01:24:26.014875] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x149b680 (9): Bad file descriptor 00:19:00.231 [2024-07-16 01:24:26.015876] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:00.231 [2024-07-16 01:24:26.015885] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:00.231 [2024-07-16 01:24:26.015892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
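The telling errors here are tcp_sock_get_key/posix_sock_psk_find_session_server_cb: during the handshake the target resolves the pre-shared key from the TLS PSK identity, which embeds both NQNs ("NVMe0R01 <hostnqn> <subnqn>"; the trailing tag names the hash variant of the retained PSK). Only host1 was registered against cnode1, so attaching as host2 finds no key and the connection dies in the usual ENOTCONN/EIO pattern. For this identity to resolve, the target would need a registration along these lines (a hypothetical call, mirroring the nvmf_subsystem_add_host syntax used later in this log):

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.geN8uW3FNw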
00:19:00.231 request: 00:19:00.231 { 00:19:00.231 "name": "TLSTEST", 00:19:00.231 "trtype": "tcp", 00:19:00.231 "traddr": "10.0.0.2", 00:19:00.231 "adrfam": "ipv4", 00:19:00.231 "trsvcid": "4420", 00:19:00.231 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:00.231 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:00.231 "prchk_reftag": false, 00:19:00.231 "prchk_guard": false, 00:19:00.231 "hdgst": false, 00:19:00.231 "ddgst": false, 00:19:00.231 "psk": "/tmp/tmp.geN8uW3FNw", 00:19:00.231 "method": "bdev_nvme_attach_controller", 00:19:00.231 "req_id": 1 00:19:00.231 } 00:19:00.231 Got JSON-RPC error response 00:19:00.231 response: 00:19:00.231 { 00:19:00.231 "code": -5, 00:19:00.231 "message": "Input/output error" 00:19:00.231 } 00:19:00.231 01:24:26 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3415076 00:19:00.231 01:24:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3415076 ']' 00:19:00.231 01:24:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3415076 00:19:00.231 01:24:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:00.231 01:24:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:00.231 01:24:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3415076 00:19:00.231 01:24:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:00.231 01:24:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:00.231 01:24:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3415076' 00:19:00.231 killing process with pid 3415076 00:19:00.231 01:24:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3415076 00:19:00.231 Received shutdown signal, test time was about 10.000000 seconds 00:19:00.231 00:19:00.231 Latency(us) 00:19:00.231 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:00.231 =================================================================================================================== 00:19:00.231 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:00.231 [2024-07-16 01:24:26.071097] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:00.231 01:24:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3415076 00:19:00.491 01:24:26 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:00.491 01:24:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:00.491 01:24:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:00.491 01:24:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:00.491 01:24:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:00.491 01:24:26 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.geN8uW3FNw 00:19:00.491 01:24:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:00.491 01:24:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.geN8uW3FNw 00:19:00.491 01:24:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:00.491 01:24:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:00.491 01:24:26 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:00.491 01:24:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:00.491 01:24:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.geN8uW3FNw 00:19:00.491 01:24:26 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:00.491 01:24:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:00.491 01:24:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:00.491 01:24:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.geN8uW3FNw' 00:19:00.491 01:24:26 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:00.491 01:24:26 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3415217 00:19:00.491 01:24:26 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:00.491 01:24:26 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:00.491 01:24:26 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3415217 /var/tmp/bdevperf.sock 00:19:00.491 01:24:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3415217 ']' 00:19:00.491 01:24:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:00.491 01:24:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:00.491 01:24:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:00.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:00.491 01:24:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:00.491 01:24:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.491 [2024-07-16 01:24:26.289180] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
00:19:00.491 [2024-07-16 01:24:26.289229] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3415217 ] 00:19:00.491 EAL: No free 2048 kB hugepages reported on node 1 00:19:00.491 [2024-07-16 01:24:26.338875] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.491 [2024-07-16 01:24:26.415640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:01.429 01:24:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:01.429 01:24:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:01.429 01:24:27 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.geN8uW3FNw 00:19:01.429 [2024-07-16 01:24:27.253996] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:01.429 [2024-07-16 01:24:27.254061] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:01.429 [2024-07-16 01:24:27.258855] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:01.429 [2024-07-16 01:24:27.258877] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:01.429 [2024-07-16 01:24:27.258901] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:01.429 [2024-07-16 01:24:27.259547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2310680 (107): Transport endpoint is not connected 00:19:01.429 [2024-07-16 01:24:27.260540] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2310680 (9): Bad file descriptor 00:19:01.429 [2024-07-16 01:24:27.261542] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:01.429 [2024-07-16 01:24:27.261551] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:01.429 [2024-07-16 01:24:27.261557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
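This is the symmetric negative case (valid hostnqn, wrong subnqn cnode2), and it fails through the same PSK-identity lookup. All of these expected failures are wrapped in the NOT helper that dominates the xtrace; reconstructed from the trace it behaves roughly like this (a sketch, not the verbatim autotest_common.sh source):

NOT() {
    local es=0
    "$@" || es=$?
    # exit codes above 128 mean the command died on a signal: a genuine failure
    (( es > 128 )) && return "$es"
    # otherwise invert the result: the wrapped command was expected to fail
    (( es == 0 )) && return 1
    return 0
}

The request/response dump below is that expected failure being logged before NOT inverts it.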
00:19:01.429 request: 00:19:01.429 { 00:19:01.429 "name": "TLSTEST", 00:19:01.429 "trtype": "tcp", 00:19:01.429 "traddr": "10.0.0.2", 00:19:01.429 "adrfam": "ipv4", 00:19:01.429 "trsvcid": "4420", 00:19:01.429 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:01.429 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:01.429 "prchk_reftag": false, 00:19:01.429 "prchk_guard": false, 00:19:01.429 "hdgst": false, 00:19:01.429 "ddgst": false, 00:19:01.429 "psk": "/tmp/tmp.geN8uW3FNw", 00:19:01.429 "method": "bdev_nvme_attach_controller", 00:19:01.429 "req_id": 1 00:19:01.429 } 00:19:01.429 Got JSON-RPC error response 00:19:01.429 response: 00:19:01.429 { 00:19:01.429 "code": -5, 00:19:01.429 "message": "Input/output error" 00:19:01.429 } 00:19:01.429 01:24:27 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3415217 00:19:01.429 01:24:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3415217 ']' 00:19:01.429 01:24:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3415217 00:19:01.429 01:24:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:01.429 01:24:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:01.429 01:24:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3415217 00:19:01.429 01:24:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:01.429 01:24:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:01.429 01:24:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3415217' 00:19:01.429 killing process with pid 3415217 00:19:01.429 01:24:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3415217 00:19:01.429 Received shutdown signal, test time was about 10.000000 seconds 00:19:01.429 00:19:01.429 Latency(us) 00:19:01.429 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.429 =================================================================================================================== 00:19:01.429 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:01.429 [2024-07-16 01:24:27.334972] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:01.429 01:24:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3415217 00:19:01.741 01:24:27 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:01.741 01:24:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:01.741 01:24:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:01.741 01:24:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:01.741 01:24:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:01.741 01:24:27 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:01.741 01:24:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:01.741 01:24:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:01.741 01:24:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:01.741 01:24:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:01.741 01:24:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:19:01.741 01:24:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:01.741 01:24:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:01.741 01:24:27 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:01.741 01:24:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:01.741 01:24:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:01.741 01:24:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:01.741 01:24:27 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:01.741 01:24:27 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3415401 00:19:01.741 01:24:27 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:01.741 01:24:27 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:01.741 01:24:27 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3415401 /var/tmp/bdevperf.sock 00:19:01.741 01:24:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3415401 ']' 00:19:01.741 01:24:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:01.741 01:24:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:01.741 01:24:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:01.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:01.741 01:24:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:01.741 01:24:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.741 [2024-07-16 01:24:27.552529] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
00:19:01.741 [2024-07-16 01:24:27.552578] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3415401 ] 00:19:01.741 EAL: No free 2048 kB hugepages reported on node 1 00:19:01.741 [2024-07-16 01:24:27.605212] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.741 [2024-07-16 01:24:27.674968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:02.693 01:24:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:02.693 01:24:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:02.693 01:24:28 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:02.693 [2024-07-16 01:24:28.509301] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:02.693 [2024-07-16 01:24:28.510807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2181c70 (9): Bad file descriptor 00:19:02.693 [2024-07-16 01:24:28.511806] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:02.693 [2024-07-16 01:24:28.511816] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:02.693 [2024-07-16 01:24:28.511826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
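In this case the initiator connected with no PSK at all, while the listener was created with -k (secure channel required), so the plain-text connection is dropped during setup and the same ENOTCONN/EIO sequence follows; note that the request dump below carries no "psk" member, unlike the two previous failures. The listener side of that contract is the call that appears later in this log (sketch):

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k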
00:19:02.693 request: 00:19:02.693 { 00:19:02.693 "name": "TLSTEST", 00:19:02.693 "trtype": "tcp", 00:19:02.693 "traddr": "10.0.0.2", 00:19:02.693 "adrfam": "ipv4", 00:19:02.693 "trsvcid": "4420", 00:19:02.693 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:02.693 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:02.693 "prchk_reftag": false, 00:19:02.693 "prchk_guard": false, 00:19:02.693 "hdgst": false, 00:19:02.693 "ddgst": false, 00:19:02.693 "method": "bdev_nvme_attach_controller", 00:19:02.693 "req_id": 1 00:19:02.693 } 00:19:02.693 Got JSON-RPC error response 00:19:02.693 response: 00:19:02.693 { 00:19:02.693 "code": -5, 00:19:02.693 "message": "Input/output error" 00:19:02.693 } 00:19:02.693 01:24:28 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3415401 00:19:02.693 01:24:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3415401 ']' 00:19:02.693 01:24:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3415401 00:19:02.693 01:24:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:02.693 01:24:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:02.693 01:24:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3415401 00:19:02.693 01:24:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:02.693 01:24:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:02.693 01:24:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3415401' 00:19:02.693 killing process with pid 3415401 00:19:02.693 01:24:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3415401 00:19:02.693 Received shutdown signal, test time was about 10.000000 seconds 00:19:02.693 00:19:02.693 Latency(us) 00:19:02.693 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.693 =================================================================================================================== 00:19:02.693 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:02.693 01:24:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3415401 00:19:02.952 01:24:28 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:02.952 01:24:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:02.952 01:24:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:02.952 01:24:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:02.952 01:24:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:02.952 01:24:28 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 3409991 00:19:02.952 01:24:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3409991 ']' 00:19:02.952 01:24:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3409991 00:19:02.952 01:24:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:02.952 01:24:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:02.952 01:24:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3409991 00:19:02.952 01:24:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:02.952 01:24:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:02.952 01:24:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3409991' 00:19:02.952 
killing process with pid 3409991 00:19:02.952 01:24:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3409991 00:19:02.952 [2024-07-16 01:24:28.791273] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:02.952 01:24:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3409991 00:19:03.210 01:24:28 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:03.210 01:24:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:03.210 01:24:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:03.210 01:24:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:03.210 01:24:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:03.210 01:24:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:19:03.210 01:24:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:03.210 01:24:29 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:03.210 01:24:29 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:19:03.210 01:24:29 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.FewabtIDrF 00:19:03.210 01:24:29 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:03.210 01:24:29 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.FewabtIDrF 00:19:03.210 01:24:29 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:19:03.210 01:24:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:03.210 01:24:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:03.210 01:24:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:03.210 01:24:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3415705 00:19:03.210 01:24:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3415705 00:19:03.210 01:24:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:03.210 01:24:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3415705 ']' 00:19:03.210 01:24:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.210 01:24:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:03.210 01:24:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:03.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:03.210 01:24:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:03.210 01:24:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:03.210 [2024-07-16 01:24:29.087349] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
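The format_interchange_psk step above converts the 48-character hex string into the TLS PSK interchange format: the configured key bytes followed by their little-endian CRC32, base64-encoded, behind a prefix naming the format version and digest (the 02 tag selects the SHA-384 variant; 01 would be SHA-256). A standalone sketch of what the inline python computes; note the hex string is treated as literal ASCII bytes, not decoded:

key=00112233445566778899aabbccddeeff0011223344556677
# append the CRC32 of the key bytes, then base64 key+crc
python3 -c 'import base64,sys,zlib;k=sys.argv[1].encode();crc=zlib.crc32(k).to_bytes(4,"little");print("NVMeTLSkey-1:02:%s:" % base64.b64encode(k+crc).decode())' "$key"
# should print the key_long captured above:
# NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:

The key is then written to a mktemp file and chmod'ed to 0600, which becomes important further down.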
00:19:03.210 [2024-07-16 01:24:29.087395] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:03.210 EAL: No free 2048 kB hugepages reported on node 1 00:19:03.210 [2024-07-16 01:24:29.150865] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.469 [2024-07-16 01:24:29.238044] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:03.469 [2024-07-16 01:24:29.238084] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:03.469 [2024-07-16 01:24:29.238092] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:03.469 [2024-07-16 01:24:29.238098] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:03.469 [2024-07-16 01:24:29.238103] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:03.469 [2024-07-16 01:24:29.238122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:04.035 01:24:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:04.035 01:24:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:04.035 01:24:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:04.035 01:24:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:04.035 01:24:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.035 01:24:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:04.035 01:24:29 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.FewabtIDrF 00:19:04.035 01:24:29 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.FewabtIDrF 00:19:04.035 01:24:29 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:04.293 [2024-07-16 01:24:30.070784] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:04.293 01:24:30 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:04.293 01:24:30 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:04.551 [2024-07-16 01:24:30.415657] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:04.551 [2024-07-16 01:24:30.415856] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:04.551 01:24:30 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:04.810 malloc0 00:19:04.810 01:24:30 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:05.068 01:24:30 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.FewabtIDrF 00:19:05.068 [2024-07-16 01:24:30.949538] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:05.068 01:24:30 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FewabtIDrF 00:19:05.068 01:24:30 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:05.068 01:24:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:05.068 01:24:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:05.068 01:24:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.FewabtIDrF' 00:19:05.068 01:24:30 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:05.068 01:24:30 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3416133 00:19:05.068 01:24:30 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:05.068 01:24:30 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:05.068 01:24:30 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3416133 /var/tmp/bdevperf.sock 00:19:05.068 01:24:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3416133 ']' 00:19:05.068 01:24:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:05.068 01:24:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:05.068 01:24:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:05.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:05.069 01:24:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:05.069 01:24:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:05.069 [2024-07-16 01:24:31.012772] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
00:19:05.069 [2024-07-16 01:24:31.012820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3416133 ] 00:19:05.069 EAL: No free 2048 kB hugepages reported on node 1 00:19:05.327 [2024-07-16 01:24:31.063313] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.327 [2024-07-16 01:24:31.134981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:05.900 01:24:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:05.900 01:24:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:05.900 01:24:31 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FewabtIDrF 00:19:06.158 [2024-07-16 01:24:31.975667] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:06.158 [2024-07-16 01:24:31.975736] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:06.158 TLSTESTn1 00:19:06.158 01:24:32 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:06.415 Running I/O for 10 seconds... 00:19:16.393 00:19:16.393 Latency(us) 00:19:16.393 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.393 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:16.393 Verification LBA range: start 0x0 length 0x2000 00:19:16.393 TLSTESTn1 : 10.02 5580.30 21.80 0.00 0.00 22901.48 6553.60 27837.20 00:19:16.393 =================================================================================================================== 00:19:16.393 Total : 5580.30 21.80 0.00 0.00 22901.48 6553.60 27837.20 00:19:16.393 0 00:19:16.393 01:24:42 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:16.393 01:24:42 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3416133 00:19:16.393 01:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3416133 ']' 00:19:16.393 01:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3416133 00:19:16.393 01:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:16.393 01:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:16.393 01:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3416133 00:19:16.393 01:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:16.393 01:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:16.393 01:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3416133' 00:19:16.393 killing process with pid 3416133 00:19:16.393 01:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3416133 00:19:16.393 Received shutdown signal, test time was about 10.000000 seconds 00:19:16.393 00:19:16.393 Latency(us) 00:19:16.393 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:19:16.393 =================================================================================================================== 00:19:16.393 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:16.393 [2024-07-16 01:24:42.247506] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:16.393 01:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3416133 00:19:16.653 01:24:42 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.FewabtIDrF 00:19:16.653 01:24:42 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FewabtIDrF 00:19:16.653 01:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:16.653 01:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FewabtIDrF 00:19:16.653 01:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:16.653 01:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:16.653 01:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:16.653 01:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:16.653 01:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FewabtIDrF 00:19:16.653 01:24:42 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:16.653 01:24:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:16.653 01:24:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:16.653 01:24:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.FewabtIDrF' 00:19:16.653 01:24:42 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:16.653 01:24:42 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3417971 00:19:16.653 01:24:42 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:16.653 01:24:42 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:16.653 01:24:42 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3417971 /var/tmp/bdevperf.sock 00:19:16.653 01:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3417971 ']' 00:19:16.653 01:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:16.653 01:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:16.653 01:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:16.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:16.653 01:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:16.653 01:24:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.653 [2024-07-16 01:24:42.479735] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
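Before the permissions experiment below plays out, the successful TLSTESTn1 run above is worth a sanity check: at queue depth 128, Little's law predicts an average latency of qd / IOPS = 128 / 5580.30 ≈ 22.9 ms, consistent with the reported 22901.48 us, and 5580.30 IOPS at 4096 bytes per I/O ≈ 21.80 MiB/s, matching the throughput column. The chmod 0666 just issued deliberately violates the rule that the PSK file must be private to its owner, which is what the attach attempt below trips over.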
00:19:16.653 [2024-07-16 01:24:42.479783] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3417971 ] 00:19:16.653 EAL: No free 2048 kB hugepages reported on node 1 00:19:16.653 [2024-07-16 01:24:42.529786] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.653 [2024-07-16 01:24:42.595906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:17.589 01:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:17.589 01:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:17.589 01:24:43 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FewabtIDrF 00:19:17.589 [2024-07-16 01:24:43.448644] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:17.589 [2024-07-16 01:24:43.448690] bdev_nvme.c:6133:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:17.589 [2024-07-16 01:24:43.448698] bdev_nvme.c:6238:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.FewabtIDrF 00:19:17.589 request: 00:19:17.589 { 00:19:17.589 "name": "TLSTEST", 00:19:17.589 "trtype": "tcp", 00:19:17.589 "traddr": "10.0.0.2", 00:19:17.589 "adrfam": "ipv4", 00:19:17.589 "trsvcid": "4420", 00:19:17.589 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:17.589 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:17.589 "prchk_reftag": false, 00:19:17.589 "prchk_guard": false, 00:19:17.589 "hdgst": false, 00:19:17.589 "ddgst": false, 00:19:17.589 "psk": "/tmp/tmp.FewabtIDrF", 00:19:17.589 "method": "bdev_nvme_attach_controller", 00:19:17.589 "req_id": 1 00:19:17.589 } 00:19:17.589 Got JSON-RPC error response 00:19:17.589 response: 00:19:17.589 { 00:19:17.589 "code": -1, 00:19:17.589 "message": "Operation not permitted" 00:19:17.589 } 00:19:17.589 01:24:43 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3417971 00:19:17.589 01:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3417971 ']' 00:19:17.589 01:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3417971 00:19:17.589 01:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:17.589 01:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:17.589 01:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3417971 00:19:17.589 01:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:17.589 01:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:17.589 01:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3417971' 00:19:17.589 killing process with pid 3417971 00:19:17.589 01:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3417971 00:19:17.589 Received shutdown signal, test time was about 10.000000 seconds 00:19:17.589 00:19:17.590 Latency(us) 00:19:17.590 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.590 
=================================================================================================================== 00:19:17.590 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:17.590 01:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3417971 00:19:17.849 01:24:43 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:17.849 01:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:17.849 01:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:17.849 01:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:17.849 01:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:17.849 01:24:43 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 3415705 00:19:17.849 01:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3415705 ']' 00:19:17.849 01:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3415705 00:19:17.849 01:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:17.849 01:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:17.849 01:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3415705 00:19:17.849 01:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:17.849 01:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:17.849 01:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3415705' 00:19:17.849 killing process with pid 3415705 00:19:17.849 01:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3415705 00:19:17.849 [2024-07-16 01:24:43.732197] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:17.849 01:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3415705 00:19:18.107 01:24:43 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:19:18.107 01:24:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:18.107 01:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:18.107 01:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.107 01:24:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3418218 00:19:18.108 01:24:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3418218 00:19:18.108 01:24:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:18.108 01:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3418218 ']' 00:19:18.108 01:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.108 01:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:18.108 01:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
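Both ends enforce the same file-permission rule: bdev_nvme refused to load the 0666 key above with -1 (Operation not permitted), and nvmf_subsystem_add_host on the freshly restarted target is about to fail against the same file. A pre-flight check equivalent in spirit would be (a sketch; GNU stat assumed):

key=/tmp/tmp.FewabtIDrF
perms=$(stat -c '%a' "$key")
# this run shows SPDK accepts 0600 and rejects 0666
[ "$perms" = 600 ] || { echo "refusing $key: mode $perms"; exit 1; }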
00:19:18.108 01:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:18.108 01:24:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.108 [2024-07-16 01:24:43.974780] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:19:18.108 [2024-07-16 01:24:43.974826] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:18.108 EAL: No free 2048 kB hugepages reported on node 1 00:19:18.108 [2024-07-16 01:24:44.032596] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.367 [2024-07-16 01:24:44.108798] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:18.367 [2024-07-16 01:24:44.108831] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:18.367 [2024-07-16 01:24:44.108838] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:18.367 [2024-07-16 01:24:44.108843] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:18.367 [2024-07-16 01:24:44.108848] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:18.367 [2024-07-16 01:24:44.108882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:18.933 01:24:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:18.933 01:24:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:18.933 01:24:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:18.933 01:24:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:18.933 01:24:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.933 01:24:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:18.933 01:24:44 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.FewabtIDrF 00:19:18.933 01:24:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:18.933 01:24:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.FewabtIDrF 00:19:18.933 01:24:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:19:18.933 01:24:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:18.933 01:24:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:19:18.933 01:24:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:18.933 01:24:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.FewabtIDrF 00:19:18.933 01:24:44 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.FewabtIDrF 00:19:18.933 01:24:44 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:19.191 [2024-07-16 01:24:44.961873] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:19.191 01:24:44 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:19.191 
01:24:45 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:19.447 [2024-07-16 01:24:45.286703] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:19.447 [2024-07-16 01:24:45.286875] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:19.447 01:24:45 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:19.705 malloc0 00:19:19.705 01:24:45 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:19.705 01:24:45 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FewabtIDrF 00:19:19.963 [2024-07-16 01:24:45.775920] tcp.c:3603:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:19.963 [2024-07-16 01:24:45.775945] tcp.c:3689:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:19:19.963 [2024-07-16 01:24:45.775966] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:19.963 request: 00:19:19.963 { 00:19:19.963 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:19.963 "host": "nqn.2016-06.io.spdk:host1", 00:19:19.963 "psk": "/tmp/tmp.FewabtIDrF", 00:19:19.963 "method": "nvmf_subsystem_add_host", 00:19:19.963 "req_id": 1 00:19:19.963 } 00:19:19.963 Got JSON-RPC error response 00:19:19.963 response: 00:19:19.963 { 00:19:19.963 "code": -32603, 00:19:19.963 "message": "Internal error" 00:19:19.963 } 00:19:19.963 01:24:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:19.963 01:24:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:19.963 01:24:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:19.963 01:24:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:19.963 01:24:45 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 3418218 00:19:19.963 01:24:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3418218 ']' 00:19:19.963 01:24:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3418218 00:19:19.963 01:24:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:19.963 01:24:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:19.963 01:24:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3418218 00:19:19.963 01:24:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:19.963 01:24:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:19.963 01:24:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3418218' 00:19:19.963 killing process with pid 3418218 00:19:19.963 01:24:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3418218 00:19:19.963 01:24:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3418218 00:19:20.222 01:24:46 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.FewabtIDrF 00:19:20.222 01:24:46 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:19:20.222 
01:24:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:20.222 01:24:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:20.222 01:24:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:20.222 01:24:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3418620 00:19:20.222 01:24:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3418620 00:19:20.222 01:24:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:20.222 01:24:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3418620 ']' 00:19:20.222 01:24:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.222 01:24:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:20.222 01:24:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:20.222 01:24:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:20.222 01:24:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:20.222 [2024-07-16 01:24:46.091917] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:19:20.222 [2024-07-16 01:24:46.091964] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:20.222 EAL: No free 2048 kB hugepages reported on node 1 00:19:20.222 [2024-07-16 01:24:46.152660] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.480 [2024-07-16 01:24:46.225369] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:20.480 [2024-07-16 01:24:46.225406] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:20.480 [2024-07-16 01:24:46.225412] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:20.480 [2024-07-16 01:24:46.225418] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:20.480 [2024-07-16 01:24:46.225422] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
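With the key back at mode 0600 (the chmod at target/tls.sh@181 above), the same bring-up now succeeds. Condensed from the setup trace that follows, the target-side sequence is (a sketch of the identical rpc.py calls without the xtrace decoration):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FewabtIDrF

the only remaining warning being the PSK-path deprecation notice scheduled for removal in v24.09.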
00:19:20.480 [2024-07-16 01:24:46.225456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:21.047 01:24:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:21.047 01:24:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:21.047 01:24:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:21.047 01:24:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:21.047 01:24:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:21.047 01:24:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:21.047 01:24:46 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.FewabtIDrF 00:19:21.047 01:24:46 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.FewabtIDrF 00:19:21.047 01:24:46 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:21.306 [2024-07-16 01:24:47.062252] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:21.306 01:24:47 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:21.306 01:24:47 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:21.563 [2024-07-16 01:24:47.411161] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:21.563 [2024-07-16 01:24:47.411358] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:21.564 01:24:47 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:21.822 malloc0 00:19:21.822 01:24:47 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:21.822 01:24:47 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FewabtIDrF 00:19:22.080 [2024-07-16 01:24:47.920499] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:22.080 01:24:47 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:22.080 01:24:47 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=3418963 00:19:22.080 01:24:47 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:22.080 01:24:47 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 3418963 /var/tmp/bdevperf.sock 00:19:22.080 01:24:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3418963 ']' 00:19:22.080 01:24:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:22.080 01:24:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:22.080 01:24:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:22.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:22.080 01:24:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:22.080 01:24:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.080 [2024-07-16 01:24:47.969070] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:19:22.080 [2024-07-16 01:24:47.969118] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3418963 ] 00:19:22.080 EAL: No free 2048 kB hugepages reported on node 1 00:19:22.080 [2024-07-16 01:24:48.019031] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.338 [2024-07-16 01:24:48.091695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:22.905 01:24:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:22.905 01:24:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:22.905 01:24:48 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FewabtIDrF 00:19:23.163 [2024-07-16 01:24:48.944413] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:23.163 [2024-07-16 01:24:48.944485] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:23.163 TLSTESTn1 00:19:23.163 01:24:49 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:23.421 01:24:49 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:19:23.421 "subsystems": [ 00:19:23.421 { 00:19:23.421 "subsystem": "keyring", 00:19:23.421 "config": [] 00:19:23.421 }, 00:19:23.421 { 00:19:23.421 "subsystem": "iobuf", 00:19:23.421 "config": [ 00:19:23.421 { 00:19:23.421 "method": "iobuf_set_options", 00:19:23.421 "params": { 00:19:23.421 "small_pool_count": 8192, 00:19:23.421 "large_pool_count": 1024, 00:19:23.421 "small_bufsize": 8192, 00:19:23.421 "large_bufsize": 135168 00:19:23.421 } 00:19:23.421 } 00:19:23.421 ] 00:19:23.421 }, 00:19:23.421 { 00:19:23.421 "subsystem": "sock", 00:19:23.421 "config": [ 00:19:23.421 { 00:19:23.421 "method": "sock_set_default_impl", 00:19:23.421 "params": { 00:19:23.421 "impl_name": "posix" 00:19:23.421 } 00:19:23.421 }, 00:19:23.421 { 00:19:23.421 "method": "sock_impl_set_options", 00:19:23.421 "params": { 00:19:23.421 "impl_name": "ssl", 00:19:23.421 "recv_buf_size": 4096, 00:19:23.421 "send_buf_size": 4096, 00:19:23.421 "enable_recv_pipe": true, 00:19:23.421 "enable_quickack": false, 00:19:23.421 "enable_placement_id": 0, 00:19:23.421 "enable_zerocopy_send_server": true, 00:19:23.421 "enable_zerocopy_send_client": false, 00:19:23.421 "zerocopy_threshold": 0, 00:19:23.421 "tls_version": 0, 00:19:23.421 "enable_ktls": false 00:19:23.421 } 00:19:23.421 }, 00:19:23.421 { 00:19:23.421 "method": "sock_impl_set_options", 00:19:23.421 "params": { 00:19:23.421 "impl_name": "posix", 00:19:23.421 "recv_buf_size": 2097152, 00:19:23.421 
"send_buf_size": 2097152, 00:19:23.421 "enable_recv_pipe": true, 00:19:23.421 "enable_quickack": false, 00:19:23.421 "enable_placement_id": 0, 00:19:23.421 "enable_zerocopy_send_server": true, 00:19:23.421 "enable_zerocopy_send_client": false, 00:19:23.421 "zerocopy_threshold": 0, 00:19:23.421 "tls_version": 0, 00:19:23.421 "enable_ktls": false 00:19:23.421 } 00:19:23.421 } 00:19:23.421 ] 00:19:23.421 }, 00:19:23.421 { 00:19:23.421 "subsystem": "vmd", 00:19:23.421 "config": [] 00:19:23.421 }, 00:19:23.421 { 00:19:23.421 "subsystem": "accel", 00:19:23.421 "config": [ 00:19:23.421 { 00:19:23.421 "method": "accel_set_options", 00:19:23.421 "params": { 00:19:23.421 "small_cache_size": 128, 00:19:23.421 "large_cache_size": 16, 00:19:23.421 "task_count": 2048, 00:19:23.421 "sequence_count": 2048, 00:19:23.421 "buf_count": 2048 00:19:23.421 } 00:19:23.421 } 00:19:23.421 ] 00:19:23.421 }, 00:19:23.421 { 00:19:23.421 "subsystem": "bdev", 00:19:23.421 "config": [ 00:19:23.421 { 00:19:23.421 "method": "bdev_set_options", 00:19:23.421 "params": { 00:19:23.421 "bdev_io_pool_size": 65535, 00:19:23.421 "bdev_io_cache_size": 256, 00:19:23.421 "bdev_auto_examine": true, 00:19:23.421 "iobuf_small_cache_size": 128, 00:19:23.421 "iobuf_large_cache_size": 16 00:19:23.421 } 00:19:23.421 }, 00:19:23.421 { 00:19:23.421 "method": "bdev_raid_set_options", 00:19:23.421 "params": { 00:19:23.421 "process_window_size_kb": 1024 00:19:23.421 } 00:19:23.421 }, 00:19:23.421 { 00:19:23.421 "method": "bdev_iscsi_set_options", 00:19:23.421 "params": { 00:19:23.421 "timeout_sec": 30 00:19:23.421 } 00:19:23.421 }, 00:19:23.421 { 00:19:23.421 "method": "bdev_nvme_set_options", 00:19:23.421 "params": { 00:19:23.421 "action_on_timeout": "none", 00:19:23.421 "timeout_us": 0, 00:19:23.421 "timeout_admin_us": 0, 00:19:23.421 "keep_alive_timeout_ms": 10000, 00:19:23.421 "arbitration_burst": 0, 00:19:23.421 "low_priority_weight": 0, 00:19:23.421 "medium_priority_weight": 0, 00:19:23.421 "high_priority_weight": 0, 00:19:23.421 "nvme_adminq_poll_period_us": 10000, 00:19:23.421 "nvme_ioq_poll_period_us": 0, 00:19:23.422 "io_queue_requests": 0, 00:19:23.422 "delay_cmd_submit": true, 00:19:23.422 "transport_retry_count": 4, 00:19:23.422 "bdev_retry_count": 3, 00:19:23.422 "transport_ack_timeout": 0, 00:19:23.422 "ctrlr_loss_timeout_sec": 0, 00:19:23.422 "reconnect_delay_sec": 0, 00:19:23.422 "fast_io_fail_timeout_sec": 0, 00:19:23.422 "disable_auto_failback": false, 00:19:23.422 "generate_uuids": false, 00:19:23.422 "transport_tos": 0, 00:19:23.422 "nvme_error_stat": false, 00:19:23.422 "rdma_srq_size": 0, 00:19:23.422 "io_path_stat": false, 00:19:23.422 "allow_accel_sequence": false, 00:19:23.422 "rdma_max_cq_size": 0, 00:19:23.422 "rdma_cm_event_timeout_ms": 0, 00:19:23.422 "dhchap_digests": [ 00:19:23.422 "sha256", 00:19:23.422 "sha384", 00:19:23.422 "sha512" 00:19:23.422 ], 00:19:23.422 "dhchap_dhgroups": [ 00:19:23.422 "null", 00:19:23.422 "ffdhe2048", 00:19:23.422 "ffdhe3072", 00:19:23.422 "ffdhe4096", 00:19:23.422 "ffdhe6144", 00:19:23.422 "ffdhe8192" 00:19:23.422 ] 00:19:23.422 } 00:19:23.422 }, 00:19:23.422 { 00:19:23.422 "method": "bdev_nvme_set_hotplug", 00:19:23.422 "params": { 00:19:23.422 "period_us": 100000, 00:19:23.422 "enable": false 00:19:23.422 } 00:19:23.422 }, 00:19:23.422 { 00:19:23.422 "method": "bdev_malloc_create", 00:19:23.422 "params": { 00:19:23.422 "name": "malloc0", 00:19:23.422 "num_blocks": 8192, 00:19:23.422 "block_size": 4096, 00:19:23.422 "physical_block_size": 4096, 00:19:23.422 "uuid": 
"f7d57337-cf6a-4ce7-8051-0830e858b4d7", 00:19:23.422 "optimal_io_boundary": 0 00:19:23.422 } 00:19:23.422 }, 00:19:23.422 { 00:19:23.422 "method": "bdev_wait_for_examine" 00:19:23.422 } 00:19:23.422 ] 00:19:23.422 }, 00:19:23.422 { 00:19:23.422 "subsystem": "nbd", 00:19:23.422 "config": [] 00:19:23.422 }, 00:19:23.422 { 00:19:23.422 "subsystem": "scheduler", 00:19:23.422 "config": [ 00:19:23.422 { 00:19:23.422 "method": "framework_set_scheduler", 00:19:23.422 "params": { 00:19:23.422 "name": "static" 00:19:23.422 } 00:19:23.422 } 00:19:23.422 ] 00:19:23.422 }, 00:19:23.422 { 00:19:23.422 "subsystem": "nvmf", 00:19:23.422 "config": [ 00:19:23.422 { 00:19:23.422 "method": "nvmf_set_config", 00:19:23.422 "params": { 00:19:23.422 "discovery_filter": "match_any", 00:19:23.422 "admin_cmd_passthru": { 00:19:23.422 "identify_ctrlr": false 00:19:23.422 } 00:19:23.422 } 00:19:23.422 }, 00:19:23.422 { 00:19:23.422 "method": "nvmf_set_max_subsystems", 00:19:23.422 "params": { 00:19:23.422 "max_subsystems": 1024 00:19:23.422 } 00:19:23.422 }, 00:19:23.422 { 00:19:23.422 "method": "nvmf_set_crdt", 00:19:23.422 "params": { 00:19:23.422 "crdt1": 0, 00:19:23.422 "crdt2": 0, 00:19:23.422 "crdt3": 0 00:19:23.422 } 00:19:23.422 }, 00:19:23.422 { 00:19:23.422 "method": "nvmf_create_transport", 00:19:23.422 "params": { 00:19:23.422 "trtype": "TCP", 00:19:23.422 "max_queue_depth": 128, 00:19:23.422 "max_io_qpairs_per_ctrlr": 127, 00:19:23.422 "in_capsule_data_size": 4096, 00:19:23.422 "max_io_size": 131072, 00:19:23.422 "io_unit_size": 131072, 00:19:23.422 "max_aq_depth": 128, 00:19:23.422 "num_shared_buffers": 511, 00:19:23.422 "buf_cache_size": 4294967295, 00:19:23.422 "dif_insert_or_strip": false, 00:19:23.422 "zcopy": false, 00:19:23.422 "c2h_success": false, 00:19:23.422 "sock_priority": 0, 00:19:23.422 "abort_timeout_sec": 1, 00:19:23.422 "ack_timeout": 0, 00:19:23.422 "data_wr_pool_size": 0 00:19:23.422 } 00:19:23.422 }, 00:19:23.422 { 00:19:23.422 "method": "nvmf_create_subsystem", 00:19:23.422 "params": { 00:19:23.422 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.422 "allow_any_host": false, 00:19:23.422 "serial_number": "SPDK00000000000001", 00:19:23.422 "model_number": "SPDK bdev Controller", 00:19:23.422 "max_namespaces": 10, 00:19:23.422 "min_cntlid": 1, 00:19:23.422 "max_cntlid": 65519, 00:19:23.422 "ana_reporting": false 00:19:23.422 } 00:19:23.422 }, 00:19:23.422 { 00:19:23.422 "method": "nvmf_subsystem_add_host", 00:19:23.422 "params": { 00:19:23.422 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.422 "host": "nqn.2016-06.io.spdk:host1", 00:19:23.422 "psk": "/tmp/tmp.FewabtIDrF" 00:19:23.422 } 00:19:23.422 }, 00:19:23.422 { 00:19:23.422 "method": "nvmf_subsystem_add_ns", 00:19:23.422 "params": { 00:19:23.422 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.422 "namespace": { 00:19:23.422 "nsid": 1, 00:19:23.422 "bdev_name": "malloc0", 00:19:23.422 "nguid": "F7D57337CF6A4CE780510830E858B4D7", 00:19:23.422 "uuid": "f7d57337-cf6a-4ce7-8051-0830e858b4d7", 00:19:23.422 "no_auto_visible": false 00:19:23.422 } 00:19:23.422 } 00:19:23.422 }, 00:19:23.422 { 00:19:23.422 "method": "nvmf_subsystem_add_listener", 00:19:23.422 "params": { 00:19:23.422 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.422 "listen_address": { 00:19:23.422 "trtype": "TCP", 00:19:23.422 "adrfam": "IPv4", 00:19:23.422 "traddr": "10.0.0.2", 00:19:23.422 "trsvcid": "4420" 00:19:23.422 }, 00:19:23.422 "secure_channel": true 00:19:23.422 } 00:19:23.422 } 00:19:23.422 ] 00:19:23.422 } 00:19:23.422 ] 00:19:23.422 }' 00:19:23.422 01:24:49 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:23.681 01:24:49 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:19:23.681 "subsystems": [ 00:19:23.681 { 00:19:23.681 "subsystem": "keyring", 00:19:23.681 "config": [] 00:19:23.681 }, 00:19:23.681 { 00:19:23.681 "subsystem": "iobuf", 00:19:23.681 "config": [ 00:19:23.681 { 00:19:23.681 "method": "iobuf_set_options", 00:19:23.681 "params": { 00:19:23.681 "small_pool_count": 8192, 00:19:23.681 "large_pool_count": 1024, 00:19:23.681 "small_bufsize": 8192, 00:19:23.681 "large_bufsize": 135168 00:19:23.681 } 00:19:23.681 } 00:19:23.681 ] 00:19:23.681 }, 00:19:23.681 { 00:19:23.681 "subsystem": "sock", 00:19:23.681 "config": [ 00:19:23.681 { 00:19:23.681 "method": "sock_set_default_impl", 00:19:23.681 "params": { 00:19:23.681 "impl_name": "posix" 00:19:23.681 } 00:19:23.681 }, 00:19:23.681 { 00:19:23.681 "method": "sock_impl_set_options", 00:19:23.681 "params": { 00:19:23.681 "impl_name": "ssl", 00:19:23.681 "recv_buf_size": 4096, 00:19:23.681 "send_buf_size": 4096, 00:19:23.681 "enable_recv_pipe": true, 00:19:23.681 "enable_quickack": false, 00:19:23.681 "enable_placement_id": 0, 00:19:23.681 "enable_zerocopy_send_server": true, 00:19:23.681 "enable_zerocopy_send_client": false, 00:19:23.681 "zerocopy_threshold": 0, 00:19:23.681 "tls_version": 0, 00:19:23.681 "enable_ktls": false 00:19:23.681 } 00:19:23.681 }, 00:19:23.681 { 00:19:23.681 "method": "sock_impl_set_options", 00:19:23.681 "params": { 00:19:23.681 "impl_name": "posix", 00:19:23.681 "recv_buf_size": 2097152, 00:19:23.681 "send_buf_size": 2097152, 00:19:23.681 "enable_recv_pipe": true, 00:19:23.681 "enable_quickack": false, 00:19:23.681 "enable_placement_id": 0, 00:19:23.681 "enable_zerocopy_send_server": true, 00:19:23.681 "enable_zerocopy_send_client": false, 00:19:23.681 "zerocopy_threshold": 0, 00:19:23.681 "tls_version": 0, 00:19:23.681 "enable_ktls": false 00:19:23.681 } 00:19:23.681 } 00:19:23.681 ] 00:19:23.681 }, 00:19:23.681 { 00:19:23.681 "subsystem": "vmd", 00:19:23.681 "config": [] 00:19:23.681 }, 00:19:23.681 { 00:19:23.681 "subsystem": "accel", 00:19:23.681 "config": [ 00:19:23.681 { 00:19:23.681 "method": "accel_set_options", 00:19:23.681 "params": { 00:19:23.681 "small_cache_size": 128, 00:19:23.681 "large_cache_size": 16, 00:19:23.681 "task_count": 2048, 00:19:23.681 "sequence_count": 2048, 00:19:23.681 "buf_count": 2048 00:19:23.681 } 00:19:23.681 } 00:19:23.681 ] 00:19:23.681 }, 00:19:23.681 { 00:19:23.681 "subsystem": "bdev", 00:19:23.681 "config": [ 00:19:23.681 { 00:19:23.681 "method": "bdev_set_options", 00:19:23.681 "params": { 00:19:23.681 "bdev_io_pool_size": 65535, 00:19:23.681 "bdev_io_cache_size": 256, 00:19:23.681 "bdev_auto_examine": true, 00:19:23.681 "iobuf_small_cache_size": 128, 00:19:23.681 "iobuf_large_cache_size": 16 00:19:23.681 } 00:19:23.681 }, 00:19:23.681 { 00:19:23.681 "method": "bdev_raid_set_options", 00:19:23.681 "params": { 00:19:23.681 "process_window_size_kb": 1024 00:19:23.681 } 00:19:23.681 }, 00:19:23.681 { 00:19:23.681 "method": "bdev_iscsi_set_options", 00:19:23.681 "params": { 00:19:23.681 "timeout_sec": 30 00:19:23.681 } 00:19:23.681 }, 00:19:23.681 { 00:19:23.681 "method": "bdev_nvme_set_options", 00:19:23.681 "params": { 00:19:23.681 "action_on_timeout": "none", 00:19:23.681 "timeout_us": 0, 00:19:23.681 "timeout_admin_us": 0, 00:19:23.681 "keep_alive_timeout_ms": 10000, 00:19:23.681 "arbitration_burst": 0, 
00:19:23.681 "low_priority_weight": 0, 00:19:23.681 "medium_priority_weight": 0, 00:19:23.681 "high_priority_weight": 0, 00:19:23.681 "nvme_adminq_poll_period_us": 10000, 00:19:23.681 "nvme_ioq_poll_period_us": 0, 00:19:23.681 "io_queue_requests": 512, 00:19:23.681 "delay_cmd_submit": true, 00:19:23.681 "transport_retry_count": 4, 00:19:23.681 "bdev_retry_count": 3, 00:19:23.681 "transport_ack_timeout": 0, 00:19:23.681 "ctrlr_loss_timeout_sec": 0, 00:19:23.681 "reconnect_delay_sec": 0, 00:19:23.681 "fast_io_fail_timeout_sec": 0, 00:19:23.681 "disable_auto_failback": false, 00:19:23.681 "generate_uuids": false, 00:19:23.681 "transport_tos": 0, 00:19:23.681 "nvme_error_stat": false, 00:19:23.681 "rdma_srq_size": 0, 00:19:23.682 "io_path_stat": false, 00:19:23.682 "allow_accel_sequence": false, 00:19:23.682 "rdma_max_cq_size": 0, 00:19:23.682 "rdma_cm_event_timeout_ms": 0, 00:19:23.682 "dhchap_digests": [ 00:19:23.682 "sha256", 00:19:23.682 "sha384", 00:19:23.682 "sha512" 00:19:23.682 ], 00:19:23.682 "dhchap_dhgroups": [ 00:19:23.682 "null", 00:19:23.682 "ffdhe2048", 00:19:23.682 "ffdhe3072", 00:19:23.682 "ffdhe4096", 00:19:23.682 "ffdhe6144", 00:19:23.682 "ffdhe8192" 00:19:23.682 ] 00:19:23.682 } 00:19:23.682 }, 00:19:23.682 { 00:19:23.682 "method": "bdev_nvme_attach_controller", 00:19:23.682 "params": { 00:19:23.682 "name": "TLSTEST", 00:19:23.682 "trtype": "TCP", 00:19:23.682 "adrfam": "IPv4", 00:19:23.682 "traddr": "10.0.0.2", 00:19:23.682 "trsvcid": "4420", 00:19:23.682 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.682 "prchk_reftag": false, 00:19:23.682 "prchk_guard": false, 00:19:23.682 "ctrlr_loss_timeout_sec": 0, 00:19:23.682 "reconnect_delay_sec": 0, 00:19:23.682 "fast_io_fail_timeout_sec": 0, 00:19:23.682 "psk": "/tmp/tmp.FewabtIDrF", 00:19:23.682 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:23.682 "hdgst": false, 00:19:23.682 "ddgst": false 00:19:23.682 } 00:19:23.682 }, 00:19:23.682 { 00:19:23.682 "method": "bdev_nvme_set_hotplug", 00:19:23.682 "params": { 00:19:23.682 "period_us": 100000, 00:19:23.682 "enable": false 00:19:23.682 } 00:19:23.682 }, 00:19:23.682 { 00:19:23.682 "method": "bdev_wait_for_examine" 00:19:23.682 } 00:19:23.682 ] 00:19:23.682 }, 00:19:23.682 { 00:19:23.682 "subsystem": "nbd", 00:19:23.682 "config": [] 00:19:23.682 } 00:19:23.682 ] 00:19:23.682 }' 00:19:23.682 01:24:49 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 3418963 00:19:23.682 01:24:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3418963 ']' 00:19:23.682 01:24:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3418963 00:19:23.682 01:24:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:23.682 01:24:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:23.682 01:24:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3418963 00:19:23.682 01:24:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:23.682 01:24:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:23.682 01:24:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3418963' 00:19:23.682 killing process with pid 3418963 00:19:23.682 01:24:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3418963 00:19:23.682 Received shutdown signal, test time was about 10.000000 seconds 00:19:23.682 00:19:23.682 Latency(us) 00:19:23.682 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:19:23.682 =================================================================================================================== 00:19:23.682 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:23.682 [2024-07-16 01:24:49.554483] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:23.682 01:24:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3418963 00:19:23.940 01:24:49 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 3418620 00:19:23.940 01:24:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3418620 ']' 00:19:23.940 01:24:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3418620 00:19:23.940 01:24:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:23.940 01:24:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:23.940 01:24:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3418620 00:19:23.940 01:24:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:23.940 01:24:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:23.940 01:24:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3418620' 00:19:23.940 killing process with pid 3418620 00:19:23.940 01:24:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3418620 00:19:23.940 [2024-07-16 01:24:49.777641] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:23.940 01:24:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3418620 00:19:24.199 01:24:49 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:24.199 01:24:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:24.199 01:24:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:24.199 01:24:49 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:19:24.199 "subsystems": [ 00:19:24.199 { 00:19:24.199 "subsystem": "keyring", 00:19:24.199 "config": [] 00:19:24.199 }, 00:19:24.199 { 00:19:24.199 "subsystem": "iobuf", 00:19:24.199 "config": [ 00:19:24.199 { 00:19:24.199 "method": "iobuf_set_options", 00:19:24.199 "params": { 00:19:24.199 "small_pool_count": 8192, 00:19:24.199 "large_pool_count": 1024, 00:19:24.199 "small_bufsize": 8192, 00:19:24.199 "large_bufsize": 135168 00:19:24.199 } 00:19:24.199 } 00:19:24.199 ] 00:19:24.199 }, 00:19:24.199 { 00:19:24.199 "subsystem": "sock", 00:19:24.199 "config": [ 00:19:24.199 { 00:19:24.199 "method": "sock_set_default_impl", 00:19:24.199 "params": { 00:19:24.199 "impl_name": "posix" 00:19:24.199 } 00:19:24.199 }, 00:19:24.199 { 00:19:24.199 "method": "sock_impl_set_options", 00:19:24.199 "params": { 00:19:24.199 "impl_name": "ssl", 00:19:24.199 "recv_buf_size": 4096, 00:19:24.199 "send_buf_size": 4096, 00:19:24.199 "enable_recv_pipe": true, 00:19:24.199 "enable_quickack": false, 00:19:24.199 "enable_placement_id": 0, 00:19:24.199 "enable_zerocopy_send_server": true, 00:19:24.199 "enable_zerocopy_send_client": false, 00:19:24.199 "zerocopy_threshold": 0, 00:19:24.199 "tls_version": 0, 00:19:24.199 "enable_ktls": false 00:19:24.199 } 00:19:24.199 }, 00:19:24.199 { 00:19:24.199 "method": "sock_impl_set_options", 00:19:24.199 "params": { 00:19:24.199 "impl_name": "posix", 00:19:24.199 
"recv_buf_size": 2097152, 00:19:24.199 "send_buf_size": 2097152, 00:19:24.199 "enable_recv_pipe": true, 00:19:24.199 "enable_quickack": false, 00:19:24.199 "enable_placement_id": 0, 00:19:24.199 "enable_zerocopy_send_server": true, 00:19:24.199 "enable_zerocopy_send_client": false, 00:19:24.199 "zerocopy_threshold": 0, 00:19:24.199 "tls_version": 0, 00:19:24.199 "enable_ktls": false 00:19:24.199 } 00:19:24.199 } 00:19:24.199 ] 00:19:24.199 }, 00:19:24.199 { 00:19:24.199 "subsystem": "vmd", 00:19:24.199 "config": [] 00:19:24.199 }, 00:19:24.199 { 00:19:24.199 "subsystem": "accel", 00:19:24.199 "config": [ 00:19:24.199 { 00:19:24.199 "method": "accel_set_options", 00:19:24.199 "params": { 00:19:24.199 "small_cache_size": 128, 00:19:24.199 "large_cache_size": 16, 00:19:24.199 "task_count": 2048, 00:19:24.199 "sequence_count": 2048, 00:19:24.199 "buf_count": 2048 00:19:24.199 } 00:19:24.199 } 00:19:24.199 ] 00:19:24.199 }, 00:19:24.199 { 00:19:24.199 "subsystem": "bdev", 00:19:24.199 "config": [ 00:19:24.199 { 00:19:24.199 "method": "bdev_set_options", 00:19:24.199 "params": { 00:19:24.199 "bdev_io_pool_size": 65535, 00:19:24.199 "bdev_io_cache_size": 256, 00:19:24.199 "bdev_auto_examine": true, 00:19:24.199 "iobuf_small_cache_size": 128, 00:19:24.199 "iobuf_large_cache_size": 16 00:19:24.199 } 00:19:24.199 }, 00:19:24.199 { 00:19:24.199 "method": "bdev_raid_set_options", 00:19:24.199 "params": { 00:19:24.199 "process_window_size_kb": 1024 00:19:24.199 } 00:19:24.199 }, 00:19:24.199 { 00:19:24.199 "method": "bdev_iscsi_set_options", 00:19:24.199 "params": { 00:19:24.199 "timeout_sec": 30 00:19:24.199 } 00:19:24.199 }, 00:19:24.199 { 00:19:24.199 "method": "bdev_nvme_set_options", 00:19:24.199 "params": { 00:19:24.199 "action_on_timeout": "none", 00:19:24.199 "timeout_us": 0, 00:19:24.199 "timeout_admin_us": 0, 00:19:24.199 "keep_alive_timeout_ms": 10000, 00:19:24.199 "arbitration_burst": 0, 00:19:24.199 "low_priority_weight": 0, 00:19:24.199 "medium_priority_weight": 0, 00:19:24.199 "high_priority_weight": 0, 00:19:24.199 "nvme_adminq_poll_period_us": 10000, 00:19:24.199 "nvme_ioq_poll_period_us": 0, 00:19:24.199 "io_queue_requests": 0, 00:19:24.199 "delay_cmd_submit": true, 00:19:24.199 "transport_retry_count": 4, 00:19:24.199 "bdev_retry_count": 3, 00:19:24.199 "transport_ack_timeout": 0, 00:19:24.199 "ctrlr_loss_timeout_sec": 0, 00:19:24.199 "reconnect_delay_sec": 0, 00:19:24.199 "fast_io_fail_timeout_sec": 0, 00:19:24.200 "disable_auto_failback": false, 00:19:24.200 "generate_uuids": false, 00:19:24.200 "transport_tos": 0, 00:19:24.200 "nvme_error_stat": false, 00:19:24.200 "rdma_srq_size": 0, 00:19:24.200 "io_path_stat": false, 00:19:24.200 "allow_accel_sequence": false, 00:19:24.200 "rdma_max_cq_size": 0, 00:19:24.200 "rdma_cm_event_timeout_ms": 0, 00:19:24.200 "dhchap_digests": [ 00:19:24.200 "sha256", 00:19:24.200 "sha384", 00:19:24.200 "sha512" 00:19:24.200 ], 00:19:24.200 "dhchap_dhgroups": [ 00:19:24.200 "null", 00:19:24.200 "ffdhe2048", 00:19:24.200 "ffdhe3072", 00:19:24.200 "ffdhe4096", 00:19:24.200 "ffdhe6144", 00:19:24.200 "ffdhe8192" 00:19:24.200 ] 00:19:24.200 } 00:19:24.200 }, 00:19:24.200 { 00:19:24.200 "method": "bdev_nvme_set_hotplug", 00:19:24.200 "params": { 00:19:24.200 "period_us": 100000, 00:19:24.200 "enable": false 00:19:24.200 } 00:19:24.200 }, 00:19:24.200 { 00:19:24.200 "method": "bdev_malloc_create", 00:19:24.200 "params": { 00:19:24.200 "name": "malloc0", 00:19:24.200 "num_blocks": 8192, 00:19:24.200 "block_size": 4096, 00:19:24.200 "physical_block_size": 4096, 
00:19:24.200 "uuid": "f7d57337-cf6a-4ce7-8051-0830e858b4d7", 00:19:24.200 "optimal_io_boundary": 0 00:19:24.200 } 00:19:24.200 }, 00:19:24.200 { 00:19:24.200 "method": "bdev_wait_for_examine" 00:19:24.200 } 00:19:24.200 ] 00:19:24.200 }, 00:19:24.200 { 00:19:24.200 "subsystem": "nbd", 00:19:24.200 "config": [] 00:19:24.200 }, 00:19:24.200 { 00:19:24.200 "subsystem": "scheduler", 00:19:24.200 "config": [ 00:19:24.200 { 00:19:24.200 "method": "framework_set_scheduler", 00:19:24.200 "params": { 00:19:24.200 "name": "static" 00:19:24.200 } 00:19:24.200 } 00:19:24.200 ] 00:19:24.200 }, 00:19:24.200 { 00:19:24.200 "subsystem": "nvmf", 00:19:24.200 "config": [ 00:19:24.200 { 00:19:24.200 "method": "nvmf_set_config", 00:19:24.200 "params": { 00:19:24.200 "discovery_filter": "match_any", 00:19:24.200 "admin_cmd_passthru": { 00:19:24.200 "identify_ctrlr": false 00:19:24.200 } 00:19:24.200 } 00:19:24.200 }, 00:19:24.200 { 00:19:24.200 "method": "nvmf_set_max_subsystems", 00:19:24.200 "params": { 00:19:24.200 "max_subsystems": 1024 00:19:24.200 } 00:19:24.200 }, 00:19:24.200 { 00:19:24.200 "method": "nvmf_set_crdt", 00:19:24.200 "params": { 00:19:24.200 "crdt1": 0, 00:19:24.200 "crdt2": 0, 00:19:24.200 "crdt3": 0 00:19:24.200 } 00:19:24.200 }, 00:19:24.200 { 00:19:24.200 "method": "nvmf_create_transport", 00:19:24.200 "params": { 00:19:24.200 "trtype": "TCP", 00:19:24.200 "max_queue_depth": 128, 00:19:24.200 "max_io_qpairs_per_ctrlr": 127, 00:19:24.200 "in_capsule_data_size": 4096, 00:19:24.200 "max_io_size": 131072, 00:19:24.200 "io_unit_size": 131072, 00:19:24.200 "max_aq_depth": 128, 00:19:24.200 "num_shared_buffers": 511, 00:19:24.200 "buf_cache_size": 4294967295, 00:19:24.200 "dif_insert_or_strip": false, 00:19:24.200 "zcopy": false, 00:19:24.200 "c2h_success": false, 00:19:24.200 "sock_priority": 0, 00:19:24.200 "abort_timeout_sec": 1, 00:19:24.200 "ack_timeout": 0, 00:19:24.200 "data_wr_pool_size": 0 00:19:24.200 } 00:19:24.200 }, 00:19:24.200 { 00:19:24.200 "method": "nvmf_create_subsystem", 00:19:24.200 "params": { 00:19:24.200 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.200 "allow_any_host": false, 00:19:24.200 "serial_number": "SPDK00000000000001", 00:19:24.200 "model_number": "SPDK bdev Controller", 00:19:24.200 "max_namespaces": 10, 00:19:24.200 "min_cntlid": 1, 00:19:24.200 "max_cntlid": 65519, 00:19:24.200 "ana_reporting": false 00:19:24.200 } 00:19:24.200 }, 00:19:24.200 { 00:19:24.200 "method": "nvmf_subsystem_add_host", 00:19:24.200 "params": { 00:19:24.200 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.200 "host": "nqn.2016-06.io.spdk:host1", 00:19:24.200 "psk": "/tmp/tmp.FewabtIDrF" 00:19:24.200 } 00:19:24.200 }, 00:19:24.200 { 00:19:24.200 "method": "nvmf_subsystem_add_ns", 00:19:24.200 "params": { 00:19:24.200 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.200 "namespace": { 00:19:24.200 "nsid": 1, 00:19:24.200 "bdev_name": "malloc0", 00:19:24.200 "nguid": "F7D57337CF6A4CE780510830E858B4D7", 00:19:24.200 "uuid": "f7d57337-cf6a-4ce7-8051-0830e858b4d7", 00:19:24.200 "no_auto_visible": false 00:19:24.200 } 00:19:24.200 } 00:19:24.200 }, 00:19:24.200 { 00:19:24.200 "method": "nvmf_subsystem_add_listener", 00:19:24.200 "params": { 00:19:24.200 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.200 "listen_address": { 00:19:24.200 "trtype": "TCP", 00:19:24.200 "adrfam": "IPv4", 00:19:24.200 "traddr": "10.0.0.2", 00:19:24.200 "trsvcid": "4420" 00:19:24.200 }, 00:19:24.200 "secure_channel": true 00:19:24.200 } 00:19:24.200 } 00:19:24.200 ] 00:19:24.200 } 00:19:24.200 ] 00:19:24.200 }' 
00:19:24.200 01:24:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.200 01:24:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3419222 00:19:24.200 01:24:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:24.200 01:24:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3419222 00:19:24.200 01:24:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3419222 ']' 00:19:24.200 01:24:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.200 01:24:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:24.200 01:24:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.200 01:24:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:24.200 01:24:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.200 [2024-07-16 01:24:50.022660] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:19:24.200 [2024-07-16 01:24:50.022713] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:24.200 EAL: No free 2048 kB hugepages reported on node 1 00:19:24.200 [2024-07-16 01:24:50.083043] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.200 [2024-07-16 01:24:50.161138] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:24.200 [2024-07-16 01:24:50.161176] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:24.200 [2024-07-16 01:24:50.161183] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:24.200 [2024-07-16 01:24:50.161188] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:24.200 [2024-07-16 01:24:50.161194] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
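For reference, the setup_nvmf_tgt helper (target/tls.sh@49-58) that this trace replays several times reduces to the following RPC sequence, copied from the trace with paths abbreviated:

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -s SPDK00000000000001 -m 10
  # -k marks the listener as a secure channel; save_config renders the
  # same flag as "secure_channel": true.
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FewabtIDrF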
00:19:24.200 [2024-07-16 01:24:50.161243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:24.459 [2024-07-16 01:24:50.364420] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:24.459 [2024-07-16 01:24:50.380391] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:24.459 [2024-07-16 01:24:50.396439] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:24.459 [2024-07-16 01:24:50.409674] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:25.027 01:24:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:25.027 01:24:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:25.027 01:24:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:25.027 01:24:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:25.027 01:24:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:25.027 01:24:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:25.027 01:24:50 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=3419455 00:19:25.027 01:24:50 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 3419455 /var/tmp/bdevperf.sock 00:19:25.027 01:24:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3419455 ']' 00:19:25.027 01:24:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:25.027 01:24:50 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:25.027 01:24:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:25.027 01:24:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:25.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:25.027 01:24:50 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:19:25.027 "subsystems": [ 00:19:25.027 { 00:19:25.027 "subsystem": "keyring", 00:19:25.027 "config": [] 00:19:25.027 }, 00:19:25.027 { 00:19:25.027 "subsystem": "iobuf", 00:19:25.027 "config": [ 00:19:25.027 { 00:19:25.027 "method": "iobuf_set_options", 00:19:25.027 "params": { 00:19:25.027 "small_pool_count": 8192, 00:19:25.027 "large_pool_count": 1024, 00:19:25.027 "small_bufsize": 8192, 00:19:25.027 "large_bufsize": 135168 00:19:25.027 } 00:19:25.027 } 00:19:25.027 ] 00:19:25.027 }, 00:19:25.027 { 00:19:25.027 "subsystem": "sock", 00:19:25.027 "config": [ 00:19:25.027 { 00:19:25.027 "method": "sock_set_default_impl", 00:19:25.027 "params": { 00:19:25.027 "impl_name": "posix" 00:19:25.027 } 00:19:25.027 }, 00:19:25.027 { 00:19:25.027 "method": "sock_impl_set_options", 00:19:25.027 "params": { 00:19:25.027 "impl_name": "ssl", 00:19:25.027 "recv_buf_size": 4096, 00:19:25.027 "send_buf_size": 4096, 00:19:25.027 "enable_recv_pipe": true, 00:19:25.027 "enable_quickack": false, 00:19:25.027 "enable_placement_id": 0, 00:19:25.027 "enable_zerocopy_send_server": true, 00:19:25.027 "enable_zerocopy_send_client": false, 00:19:25.027 "zerocopy_threshold": 0, 00:19:25.027 "tls_version": 0, 00:19:25.027 "enable_ktls": false 00:19:25.027 } 00:19:25.027 }, 00:19:25.027 { 00:19:25.027 "method": "sock_impl_set_options", 00:19:25.027 "params": { 00:19:25.027 "impl_name": "posix", 00:19:25.027 "recv_buf_size": 2097152, 00:19:25.027 "send_buf_size": 2097152, 00:19:25.027 "enable_recv_pipe": true, 00:19:25.027 "enable_quickack": false, 00:19:25.027 "enable_placement_id": 0, 00:19:25.027 "enable_zerocopy_send_server": true, 00:19:25.027 "enable_zerocopy_send_client": false, 00:19:25.027 "zerocopy_threshold": 0, 00:19:25.027 "tls_version": 0, 00:19:25.027 "enable_ktls": false 00:19:25.027 } 00:19:25.027 } 00:19:25.027 ] 00:19:25.027 }, 00:19:25.027 { 00:19:25.027 "subsystem": "vmd", 00:19:25.027 "config": [] 00:19:25.027 }, 00:19:25.027 { 00:19:25.027 "subsystem": "accel", 00:19:25.027 "config": [ 00:19:25.027 { 00:19:25.027 "method": "accel_set_options", 00:19:25.027 "params": { 00:19:25.027 "small_cache_size": 128, 00:19:25.027 "large_cache_size": 16, 00:19:25.027 "task_count": 2048, 00:19:25.027 "sequence_count": 2048, 00:19:25.027 "buf_count": 2048 00:19:25.027 } 00:19:25.027 } 00:19:25.027 ] 00:19:25.027 }, 00:19:25.027 { 00:19:25.027 "subsystem": "bdev", 00:19:25.027 "config": [ 00:19:25.027 { 00:19:25.027 "method": "bdev_set_options", 00:19:25.027 "params": { 00:19:25.027 "bdev_io_pool_size": 65535, 00:19:25.027 "bdev_io_cache_size": 256, 00:19:25.027 "bdev_auto_examine": true, 00:19:25.027 "iobuf_small_cache_size": 128, 00:19:25.028 "iobuf_large_cache_size": 16 00:19:25.028 } 00:19:25.028 }, 00:19:25.028 { 00:19:25.028 "method": "bdev_raid_set_options", 00:19:25.028 "params": { 00:19:25.028 "process_window_size_kb": 1024 00:19:25.028 } 00:19:25.028 }, 00:19:25.028 { 00:19:25.028 "method": "bdev_iscsi_set_options", 00:19:25.028 "params": { 00:19:25.028 "timeout_sec": 30 00:19:25.028 } 00:19:25.028 }, 00:19:25.028 { 00:19:25.028 "method": "bdev_nvme_set_options", 00:19:25.028 "params": { 00:19:25.028 "action_on_timeout": "none", 00:19:25.028 "timeout_us": 0, 00:19:25.028 "timeout_admin_us": 0, 00:19:25.028 "keep_alive_timeout_ms": 10000, 00:19:25.028 "arbitration_burst": 0, 00:19:25.028 "low_priority_weight": 0, 00:19:25.028 "medium_priority_weight": 0, 00:19:25.028 "high_priority_weight": 0, 00:19:25.028 
"nvme_adminq_poll_period_us": 10000, 00:19:25.028 "nvme_ioq_poll_period_us": 0, 00:19:25.028 "io_queue_requests": 512, 00:19:25.028 "delay_cmd_submit": true, 00:19:25.028 "transport_retry_count": 4, 00:19:25.028 "bdev_retry_count": 3, 00:19:25.028 "transport_ack_timeout": 0, 00:19:25.028 "ctrlr_loss_timeout_sec": 0, 00:19:25.028 "reconnect_delay_sec": 0, 00:19:25.028 "fast_io_fail_timeout_sec": 0, 00:19:25.028 "disable_auto_failback": false, 00:19:25.028 "generate_uuids": false, 00:19:25.028 "transport_tos": 0, 00:19:25.028 "nvme_error_stat": false, 00:19:25.028 "rdma_srq_size": 0, 00:19:25.028 "io_path_stat": false, 00:19:25.028 "allow_accel_sequence": false, 00:19:25.028 "rdma_max_cq_size": 0, 00:19:25.028 "rdma_cm_event_timeout_ms": 0, 00:19:25.028 "dhchap_digests": [ 00:19:25.028 "sha256", 00:19:25.028 "sha384", 00:19:25.028 "sha512" 00:19:25.028 ], 00:19:25.028 "dhchap_dhgroups": [ 00:19:25.028 "null", 00:19:25.028 "ffdhe2048", 00:19:25.028 "ffdhe3072", 00:19:25.028 "ffdhe4096", 00:19:25.028 "ffdhe6144", 00:19:25.028 "ffdhe8192" 00:19:25.028 ] 00:19:25.028 } 00:19:25.028 }, 00:19:25.028 { 00:19:25.028 "method": "bdev_nvme_attach_controller", 00:19:25.028 "params": { 00:19:25.028 "name": "TLSTEST", 00:19:25.028 "trtype": "TCP", 00:19:25.028 "adrfam": "IPv4", 00:19:25.028 "traddr": "10.0.0.2", 00:19:25.028 "trsvcid": "4420", 00:19:25.028 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:25.028 "prchk_reftag": false, 00:19:25.028 "prchk_guard": false, 00:19:25.028 "ctrlr_loss_timeout_sec": 0, 00:19:25.028 "reconnect_delay_sec": 0, 00:19:25.028 "fast_io_fail_timeout_sec": 0, 00:19:25.028 "psk": "/tmp/tmp.FewabtIDrF", 00:19:25.028 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:25.028 "hdgst": false, 00:19:25.028 "ddgst": false 00:19:25.028 } 00:19:25.028 }, 00:19:25.028 { 00:19:25.028 "method": "bdev_nvme_set_hotplug", 00:19:25.028 "params": { 00:19:25.028 "period_us": 100000, 00:19:25.028 "enable": false 00:19:25.028 } 00:19:25.028 }, 00:19:25.028 { 00:19:25.028 "method": "bdev_wait_for_examine" 00:19:25.028 } 00:19:25.028 ] 00:19:25.028 }, 00:19:25.028 { 00:19:25.028 "subsystem": "nbd", 00:19:25.028 "config": [] 00:19:25.028 } 00:19:25.028 ] 00:19:25.028 }' 00:19:25.028 01:24:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:25.028 01:24:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:25.028 [2024-07-16 01:24:50.904966] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
00:19:25.028 [2024-07-16 01:24:50.905013] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3419455 ] 00:19:25.028 EAL: No free 2048 kB hugepages reported on node 1 00:19:25.028 [2024-07-16 01:24:50.955758] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.287 [2024-07-16 01:24:51.027284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:25.287 [2024-07-16 01:24:51.169018] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:25.287 [2024-07-16 01:24:51.169095] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:25.854 01:24:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:25.854 01:24:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:25.854 01:24:51 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:25.854 Running I/O for 10 seconds... 00:19:38.060 00:19:38.060 Latency(us) 00:19:38.060 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:38.060 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:38.060 Verification LBA range: start 0x0 length 0x2000 00:19:38.060 TLSTESTn1 : 10.01 5215.38 20.37 0.00 0.00 24508.21 5554.96 32206.26 00:19:38.061 =================================================================================================================== 00:19:38.061 Total : 5215.38 20.37 0.00 0.00 24508.21 5554.96 32206.26 00:19:38.061 0 00:19:38.061 01:25:01 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:38.061 01:25:01 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 3419455 00:19:38.061 01:25:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3419455 ']' 00:19:38.061 01:25:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3419455 00:19:38.061 01:25:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:38.061 01:25:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:38.061 01:25:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3419455 00:19:38.061 01:25:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:38.061 01:25:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:38.061 01:25:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3419455' 00:19:38.061 killing process with pid 3419455 00:19:38.061 01:25:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3419455 00:19:38.061 Received shutdown signal, test time was about 10.000000 seconds 00:19:38.061 00:19:38.061 Latency(us) 00:19:38.061 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:38.061 =================================================================================================================== 00:19:38.061 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:38.061 [2024-07-16 01:25:01.887317] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:38.061 01:25:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3419455 00:19:38.061 01:25:02 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 3419222 00:19:38.061 01:25:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3419222 ']' 00:19:38.061 01:25:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3419222 00:19:38.061 01:25:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:38.061 01:25:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:38.061 01:25:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3419222 00:19:38.061 01:25:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:38.061 01:25:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:38.061 01:25:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3419222' 00:19:38.061 killing process with pid 3419222 00:19:38.061 01:25:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3419222 00:19:38.061 [2024-07-16 01:25:02.113317] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:38.061 01:25:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3419222 00:19:38.061 01:25:02 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:19:38.061 01:25:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:38.061 01:25:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:38.061 01:25:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:38.061 01:25:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3421302 00:19:38.061 01:25:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3421302 00:19:38.061 01:25:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:38.061 01:25:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3421302 ']' 00:19:38.061 01:25:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.061 01:25:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:38.061 01:25:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:38.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:38.061 01:25:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:38.061 01:25:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:38.061 [2024-07-16 01:25:02.356397] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
00:19:38.061 [2024-07-16 01:25:02.356442] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:38.061 EAL: No free 2048 kB hugepages reported on node 1 00:19:38.061 [2024-07-16 01:25:02.417619] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.061 [2024-07-16 01:25:02.512665] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:38.061 [2024-07-16 01:25:02.512703] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:38.061 [2024-07-16 01:25:02.512713] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:38.061 [2024-07-16 01:25:02.512719] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:38.061 [2024-07-16 01:25:02.512724] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:38.061 [2024-07-16 01:25:02.512741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.061 01:25:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:38.061 01:25:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:38.061 01:25:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:38.061 01:25:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:38.061 01:25:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:38.061 01:25:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:38.061 01:25:03 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.FewabtIDrF 00:19:38.061 01:25:03 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.FewabtIDrF 00:19:38.061 01:25:03 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:38.061 [2024-07-16 01:25:03.378462] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:38.061 01:25:03 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:38.061 01:25:03 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:38.061 [2024-07-16 01:25:03.715321] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:38.061 [2024-07-16 01:25:03.715512] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:38.061 01:25:03 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:38.061 malloc0 00:19:38.061 01:25:03 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:38.371 01:25:04 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.FewabtIDrF 00:19:38.371 [2024-07-16 01:25:04.236915] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:38.371 01:25:04 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:38.371 01:25:04 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=3421595 00:19:38.371 01:25:04 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:38.371 01:25:04 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 3421595 /var/tmp/bdevperf.sock 00:19:38.371 01:25:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3421595 ']' 00:19:38.371 01:25:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:38.371 01:25:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:38.371 01:25:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:38.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:38.371 01:25:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:38.371 01:25:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:38.371 [2024-07-16 01:25:04.299935] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:19:38.371 [2024-07-16 01:25:04.299985] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3421595 ] 00:19:38.371 EAL: No free 2048 kB hugepages reported on node 1 00:19:38.642 [2024-07-16 01:25:04.356741] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.642 [2024-07-16 01:25:04.429459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:39.209 01:25:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:39.209 01:25:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:39.209 01:25:05 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.FewabtIDrF 00:19:39.467 01:25:05 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:39.467 [2024-07-16 01:25:05.411246] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:39.726 nvme0n1 00:19:39.726 01:25:05 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:39.726 Running I/O for 1 seconds... 
00:19:40.661 00:19:40.661 Latency(us) 00:19:40.661 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.661 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:40.661 Verification LBA range: start 0x0 length 0x2000 00:19:40.661 nvme0n1 : 1.01 5423.96 21.19 0.00 0.00 23434.94 5710.99 43940.33 00:19:40.661 =================================================================================================================== 00:19:40.661 Total : 5423.96 21.19 0.00 0.00 23434.94 5710.99 43940.33 00:19:40.661 0 00:19:40.661 01:25:06 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 3421595 00:19:40.661 01:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3421595 ']' 00:19:40.661 01:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3421595 00:19:40.661 01:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:40.661 01:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:40.661 01:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3421595 00:19:40.661 01:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:40.661 01:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:40.661 01:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3421595' 00:19:40.661 killing process with pid 3421595 00:19:40.661 01:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3421595 00:19:40.661 Received shutdown signal, test time was about 1.000000 seconds 00:19:40.661 00:19:40.661 Latency(us) 00:19:40.661 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.661 =================================================================================================================== 00:19:40.661 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:40.661 01:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3421595 00:19:40.920 01:25:06 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 3421302 00:19:40.920 01:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3421302 ']' 00:19:40.920 01:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3421302 00:19:40.920 01:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:40.920 01:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:40.920 01:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3421302 00:19:40.920 01:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:40.920 01:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:40.920 01:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3421302' 00:19:40.920 killing process with pid 3421302 00:19:40.920 01:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3421302 00:19:40.920 [2024-07-16 01:25:06.868468] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:40.920 01:25:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3421302 00:19:41.179 01:25:07 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:19:41.179 01:25:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:41.179 
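The run that just completed (target/tls.sh@227-232) exercises the keyring interface on the initiator side: rather than handing bdev_nvme_attach_controller a PSK file path directly (the route flagged by the nvme_ctrlr_psk deprecation warnings earlier in the log), the key is registered once and then referenced by name. The two RPCs, copied from the trace with paths abbreviated:

  # Register the PSK file under the name key0, then attach by reference.
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.FewabtIDrF
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

As a sanity check, the throughput column in the table above is consistent with the IOPS column: 5423.96 IOPS at the 4096-byte I/O size is 5423.96 * 4096 / 2^20 ≈ 21.19 MiB/s.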
01:25:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:41.179 01:25:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:41.179 01:25:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3422052 00:19:41.179 01:25:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:41.179 01:25:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3422052 00:19:41.179 01:25:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3422052 ']' 00:19:41.179 01:25:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:41.179 01:25:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:41.179 01:25:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:41.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:41.179 01:25:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:41.179 01:25:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:41.179 [2024-07-16 01:25:07.116847] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:19:41.179 [2024-07-16 01:25:07.116892] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:41.179 EAL: No free 2048 kB hugepages reported on node 1 00:19:41.437 [2024-07-16 01:25:07.175659] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.437 [2024-07-16 01:25:07.253883] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:41.437 [2024-07-16 01:25:07.253922] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:41.437 [2024-07-16 01:25:07.253929] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:41.437 [2024-07-16 01:25:07.253935] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:41.437 [2024-07-16 01:25:07.253940] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
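The nvmfappstart above is the usual two-step: launch nvmf_tgt inside the test's network namespace, then block until its RPC socket answers. A simplified sketch of what the common.sh helpers do here, assuming the cvl_0_0_ns_spdk namespace was created earlier in the run:

    # -i 0 fixes the shared-memory id, -e 0xFFFF enables every tracepoint group.
    ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    # waitforlisten, reduced to its essence: poll the UNIX-domain RPC socket until it responds.
    until $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done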
00:19:41.437 [2024-07-16 01:25:07.253958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.004 01:25:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:42.004 01:25:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:42.004 01:25:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:42.004 01:25:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:42.004 01:25:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.004 01:25:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:42.004 01:25:07 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:19:42.004 01:25:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.004 01:25:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.004 [2024-07-16 01:25:07.956566] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:42.004 malloc0 00:19:42.004 [2024-07-16 01:25:07.984808] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:42.004 [2024-07-16 01:25:07.985014] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:42.263 01:25:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.263 01:25:08 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=3422280 00:19:42.263 01:25:08 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 3422280 /var/tmp/bdevperf.sock 00:19:42.263 01:25:08 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:42.263 01:25:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3422280 ']' 00:19:42.263 01:25:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:42.263 01:25:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:42.263 01:25:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:42.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:42.263 01:25:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:42.263 01:25:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.263 [2024-07-16 01:25:08.056985] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
00:19:42.263 [2024-07-16 01:25:08.057029] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3422280 ] 00:19:42.263 EAL: No free 2048 kB hugepages reported on node 1 00:19:42.263 [2024-07-16 01:25:08.111258] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.263 [2024-07-16 01:25:08.194433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:43.199 01:25:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:43.199 01:25:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:43.199 01:25:08 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.FewabtIDrF 00:19:43.199 01:25:09 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:43.457 [2024-07-16 01:25:09.206201] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:43.457 nvme0n1 00:19:43.457 01:25:09 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:43.457 Running I/O for 1 seconds... 00:19:44.838 00:19:44.838 Latency(us) 00:19:44.838 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:44.838 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:44.838 Verification LBA range: start 0x0 length 0x2000 00:19:44.838 nvme0n1 : 1.02 5403.69 21.11 0.00 0.00 23501.03 5929.45 39696.09 00:19:44.838 =================================================================================================================== 00:19:44.838 Total : 5403.69 21.11 0.00 0.00 23501.03 5929.45 39696.09 00:19:44.838 0 00:19:44.838 01:25:10 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:19:44.838 01:25:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.838 01:25:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:44.838 01:25:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.838 01:25:10 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:19:44.838 "subsystems": [ 00:19:44.838 { 00:19:44.838 "subsystem": "keyring", 00:19:44.838 "config": [ 00:19:44.838 { 00:19:44.838 "method": "keyring_file_add_key", 00:19:44.838 "params": { 00:19:44.838 "name": "key0", 00:19:44.838 "path": "/tmp/tmp.FewabtIDrF" 00:19:44.838 } 00:19:44.838 } 00:19:44.838 ] 00:19:44.838 }, 00:19:44.838 { 00:19:44.838 "subsystem": "iobuf", 00:19:44.838 "config": [ 00:19:44.838 { 00:19:44.838 "method": "iobuf_set_options", 00:19:44.838 "params": { 00:19:44.838 "small_pool_count": 8192, 00:19:44.838 "large_pool_count": 1024, 00:19:44.838 "small_bufsize": 8192, 00:19:44.838 "large_bufsize": 135168 00:19:44.838 } 00:19:44.838 } 00:19:44.838 ] 00:19:44.838 }, 00:19:44.838 { 00:19:44.838 "subsystem": "sock", 00:19:44.838 "config": [ 00:19:44.838 { 00:19:44.838 "method": "sock_set_default_impl", 00:19:44.838 "params": { 00:19:44.838 "impl_name": "posix" 00:19:44.838 } 
00:19:44.838 }, 00:19:44.838 { 00:19:44.838 "method": "sock_impl_set_options", 00:19:44.838 "params": { 00:19:44.838 "impl_name": "ssl", 00:19:44.838 "recv_buf_size": 4096, 00:19:44.838 "send_buf_size": 4096, 00:19:44.838 "enable_recv_pipe": true, 00:19:44.838 "enable_quickack": false, 00:19:44.838 "enable_placement_id": 0, 00:19:44.838 "enable_zerocopy_send_server": true, 00:19:44.838 "enable_zerocopy_send_client": false, 00:19:44.838 "zerocopy_threshold": 0, 00:19:44.838 "tls_version": 0, 00:19:44.838 "enable_ktls": false 00:19:44.838 } 00:19:44.838 }, 00:19:44.838 { 00:19:44.838 "method": "sock_impl_set_options", 00:19:44.838 "params": { 00:19:44.838 "impl_name": "posix", 00:19:44.838 "recv_buf_size": 2097152, 00:19:44.838 "send_buf_size": 2097152, 00:19:44.838 "enable_recv_pipe": true, 00:19:44.838 "enable_quickack": false, 00:19:44.838 "enable_placement_id": 0, 00:19:44.838 "enable_zerocopy_send_server": true, 00:19:44.838 "enable_zerocopy_send_client": false, 00:19:44.838 "zerocopy_threshold": 0, 00:19:44.838 "tls_version": 0, 00:19:44.838 "enable_ktls": false 00:19:44.838 } 00:19:44.838 } 00:19:44.838 ] 00:19:44.838 }, 00:19:44.838 { 00:19:44.838 "subsystem": "vmd", 00:19:44.838 "config": [] 00:19:44.838 }, 00:19:44.838 { 00:19:44.838 "subsystem": "accel", 00:19:44.838 "config": [ 00:19:44.838 { 00:19:44.838 "method": "accel_set_options", 00:19:44.838 "params": { 00:19:44.838 "small_cache_size": 128, 00:19:44.838 "large_cache_size": 16, 00:19:44.838 "task_count": 2048, 00:19:44.838 "sequence_count": 2048, 00:19:44.838 "buf_count": 2048 00:19:44.838 } 00:19:44.838 } 00:19:44.838 ] 00:19:44.838 }, 00:19:44.838 { 00:19:44.838 "subsystem": "bdev", 00:19:44.838 "config": [ 00:19:44.838 { 00:19:44.838 "method": "bdev_set_options", 00:19:44.838 "params": { 00:19:44.838 "bdev_io_pool_size": 65535, 00:19:44.838 "bdev_io_cache_size": 256, 00:19:44.838 "bdev_auto_examine": true, 00:19:44.838 "iobuf_small_cache_size": 128, 00:19:44.838 "iobuf_large_cache_size": 16 00:19:44.838 } 00:19:44.838 }, 00:19:44.838 { 00:19:44.838 "method": "bdev_raid_set_options", 00:19:44.838 "params": { 00:19:44.838 "process_window_size_kb": 1024 00:19:44.838 } 00:19:44.838 }, 00:19:44.838 { 00:19:44.838 "method": "bdev_iscsi_set_options", 00:19:44.838 "params": { 00:19:44.838 "timeout_sec": 30 00:19:44.838 } 00:19:44.838 }, 00:19:44.838 { 00:19:44.838 "method": "bdev_nvme_set_options", 00:19:44.838 "params": { 00:19:44.838 "action_on_timeout": "none", 00:19:44.838 "timeout_us": 0, 00:19:44.838 "timeout_admin_us": 0, 00:19:44.838 "keep_alive_timeout_ms": 10000, 00:19:44.838 "arbitration_burst": 0, 00:19:44.838 "low_priority_weight": 0, 00:19:44.838 "medium_priority_weight": 0, 00:19:44.838 "high_priority_weight": 0, 00:19:44.838 "nvme_adminq_poll_period_us": 10000, 00:19:44.838 "nvme_ioq_poll_period_us": 0, 00:19:44.838 "io_queue_requests": 0, 00:19:44.838 "delay_cmd_submit": true, 00:19:44.838 "transport_retry_count": 4, 00:19:44.838 "bdev_retry_count": 3, 00:19:44.838 "transport_ack_timeout": 0, 00:19:44.838 "ctrlr_loss_timeout_sec": 0, 00:19:44.838 "reconnect_delay_sec": 0, 00:19:44.838 "fast_io_fail_timeout_sec": 0, 00:19:44.838 "disable_auto_failback": false, 00:19:44.838 "generate_uuids": false, 00:19:44.838 "transport_tos": 0, 00:19:44.838 "nvme_error_stat": false, 00:19:44.838 "rdma_srq_size": 0, 00:19:44.838 "io_path_stat": false, 00:19:44.838 "allow_accel_sequence": false, 00:19:44.838 "rdma_max_cq_size": 0, 00:19:44.838 "rdma_cm_event_timeout_ms": 0, 00:19:44.838 "dhchap_digests": [ 00:19:44.838 "sha256", 
00:19:44.838 "sha384", 00:19:44.838 "sha512" 00:19:44.838 ], 00:19:44.838 "dhchap_dhgroups": [ 00:19:44.838 "null", 00:19:44.838 "ffdhe2048", 00:19:44.838 "ffdhe3072", 00:19:44.838 "ffdhe4096", 00:19:44.838 "ffdhe6144", 00:19:44.838 "ffdhe8192" 00:19:44.838 ] 00:19:44.838 } 00:19:44.838 }, 00:19:44.838 { 00:19:44.838 "method": "bdev_nvme_set_hotplug", 00:19:44.838 "params": { 00:19:44.838 "period_us": 100000, 00:19:44.838 "enable": false 00:19:44.838 } 00:19:44.838 }, 00:19:44.838 { 00:19:44.838 "method": "bdev_malloc_create", 00:19:44.838 "params": { 00:19:44.838 "name": "malloc0", 00:19:44.838 "num_blocks": 8192, 00:19:44.838 "block_size": 4096, 00:19:44.838 "physical_block_size": 4096, 00:19:44.838 "uuid": "456e4403-b02f-4d14-863e-a9b0f99ea9a0", 00:19:44.838 "optimal_io_boundary": 0 00:19:44.838 } 00:19:44.838 }, 00:19:44.838 { 00:19:44.838 "method": "bdev_wait_for_examine" 00:19:44.838 } 00:19:44.838 ] 00:19:44.838 }, 00:19:44.838 { 00:19:44.838 "subsystem": "nbd", 00:19:44.838 "config": [] 00:19:44.838 }, 00:19:44.838 { 00:19:44.838 "subsystem": "scheduler", 00:19:44.838 "config": [ 00:19:44.838 { 00:19:44.838 "method": "framework_set_scheduler", 00:19:44.838 "params": { 00:19:44.838 "name": "static" 00:19:44.838 } 00:19:44.838 } 00:19:44.838 ] 00:19:44.838 }, 00:19:44.838 { 00:19:44.838 "subsystem": "nvmf", 00:19:44.838 "config": [ 00:19:44.838 { 00:19:44.838 "method": "nvmf_set_config", 00:19:44.838 "params": { 00:19:44.838 "discovery_filter": "match_any", 00:19:44.838 "admin_cmd_passthru": { 00:19:44.838 "identify_ctrlr": false 00:19:44.838 } 00:19:44.839 } 00:19:44.839 }, 00:19:44.839 { 00:19:44.839 "method": "nvmf_set_max_subsystems", 00:19:44.839 "params": { 00:19:44.839 "max_subsystems": 1024 00:19:44.839 } 00:19:44.839 }, 00:19:44.839 { 00:19:44.839 "method": "nvmf_set_crdt", 00:19:44.839 "params": { 00:19:44.839 "crdt1": 0, 00:19:44.839 "crdt2": 0, 00:19:44.839 "crdt3": 0 00:19:44.839 } 00:19:44.839 }, 00:19:44.839 { 00:19:44.839 "method": "nvmf_create_transport", 00:19:44.839 "params": { 00:19:44.839 "trtype": "TCP", 00:19:44.839 "max_queue_depth": 128, 00:19:44.839 "max_io_qpairs_per_ctrlr": 127, 00:19:44.839 "in_capsule_data_size": 4096, 00:19:44.839 "max_io_size": 131072, 00:19:44.839 "io_unit_size": 131072, 00:19:44.839 "max_aq_depth": 128, 00:19:44.839 "num_shared_buffers": 511, 00:19:44.839 "buf_cache_size": 4294967295, 00:19:44.839 "dif_insert_or_strip": false, 00:19:44.839 "zcopy": false, 00:19:44.839 "c2h_success": false, 00:19:44.839 "sock_priority": 0, 00:19:44.839 "abort_timeout_sec": 1, 00:19:44.839 "ack_timeout": 0, 00:19:44.839 "data_wr_pool_size": 0 00:19:44.839 } 00:19:44.839 }, 00:19:44.839 { 00:19:44.839 "method": "nvmf_create_subsystem", 00:19:44.839 "params": { 00:19:44.839 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:44.839 "allow_any_host": false, 00:19:44.839 "serial_number": "00000000000000000000", 00:19:44.839 "model_number": "SPDK bdev Controller", 00:19:44.839 "max_namespaces": 32, 00:19:44.839 "min_cntlid": 1, 00:19:44.839 "max_cntlid": 65519, 00:19:44.839 "ana_reporting": false 00:19:44.839 } 00:19:44.839 }, 00:19:44.839 { 00:19:44.839 "method": "nvmf_subsystem_add_host", 00:19:44.839 "params": { 00:19:44.839 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:44.839 "host": "nqn.2016-06.io.spdk:host1", 00:19:44.839 "psk": "key0" 00:19:44.839 } 00:19:44.839 }, 00:19:44.839 { 00:19:44.839 "method": "nvmf_subsystem_add_ns", 00:19:44.839 "params": { 00:19:44.839 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:44.839 "namespace": { 00:19:44.839 "nsid": 1, 
00:19:44.839 "bdev_name": "malloc0", 00:19:44.839 "nguid": "456E4403B02F4D14863EA9B0F99EA9A0", 00:19:44.839 "uuid": "456e4403-b02f-4d14-863e-a9b0f99ea9a0", 00:19:44.839 "no_auto_visible": false 00:19:44.839 } 00:19:44.839 } 00:19:44.839 }, 00:19:44.839 { 00:19:44.839 "method": "nvmf_subsystem_add_listener", 00:19:44.839 "params": { 00:19:44.839 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:44.839 "listen_address": { 00:19:44.839 "trtype": "TCP", 00:19:44.839 "adrfam": "IPv4", 00:19:44.839 "traddr": "10.0.0.2", 00:19:44.839 "trsvcid": "4420" 00:19:44.839 }, 00:19:44.839 "secure_channel": false, 00:19:44.839 "sock_impl": "ssl" 00:19:44.839 } 00:19:44.839 } 00:19:44.839 ] 00:19:44.839 } 00:19:44.839 ] 00:19:44.839 }' 00:19:44.839 01:25:10 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:44.839 01:25:10 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:19:44.839 "subsystems": [ 00:19:44.839 { 00:19:44.839 "subsystem": "keyring", 00:19:44.839 "config": [ 00:19:44.839 { 00:19:44.839 "method": "keyring_file_add_key", 00:19:44.839 "params": { 00:19:44.839 "name": "key0", 00:19:44.839 "path": "/tmp/tmp.FewabtIDrF" 00:19:44.839 } 00:19:44.839 } 00:19:44.839 ] 00:19:44.839 }, 00:19:44.839 { 00:19:44.839 "subsystem": "iobuf", 00:19:44.839 "config": [ 00:19:44.839 { 00:19:44.839 "method": "iobuf_set_options", 00:19:44.839 "params": { 00:19:44.839 "small_pool_count": 8192, 00:19:44.839 "large_pool_count": 1024, 00:19:44.839 "small_bufsize": 8192, 00:19:44.839 "large_bufsize": 135168 00:19:44.839 } 00:19:44.839 } 00:19:44.839 ] 00:19:44.839 }, 00:19:44.839 { 00:19:44.839 "subsystem": "sock", 00:19:44.839 "config": [ 00:19:44.839 { 00:19:44.839 "method": "sock_set_default_impl", 00:19:44.839 "params": { 00:19:44.839 "impl_name": "posix" 00:19:44.839 } 00:19:44.839 }, 00:19:44.839 { 00:19:44.839 "method": "sock_impl_set_options", 00:19:44.839 "params": { 00:19:44.839 "impl_name": "ssl", 00:19:44.839 "recv_buf_size": 4096, 00:19:44.839 "send_buf_size": 4096, 00:19:44.839 "enable_recv_pipe": true, 00:19:44.839 "enable_quickack": false, 00:19:44.839 "enable_placement_id": 0, 00:19:44.839 "enable_zerocopy_send_server": true, 00:19:44.839 "enable_zerocopy_send_client": false, 00:19:44.839 "zerocopy_threshold": 0, 00:19:44.839 "tls_version": 0, 00:19:44.839 "enable_ktls": false 00:19:44.839 } 00:19:44.839 }, 00:19:44.839 { 00:19:44.839 "method": "sock_impl_set_options", 00:19:44.839 "params": { 00:19:44.839 "impl_name": "posix", 00:19:44.839 "recv_buf_size": 2097152, 00:19:44.839 "send_buf_size": 2097152, 00:19:44.839 "enable_recv_pipe": true, 00:19:44.839 "enable_quickack": false, 00:19:44.839 "enable_placement_id": 0, 00:19:44.839 "enable_zerocopy_send_server": true, 00:19:44.839 "enable_zerocopy_send_client": false, 00:19:44.839 "zerocopy_threshold": 0, 00:19:44.839 "tls_version": 0, 00:19:44.839 "enable_ktls": false 00:19:44.839 } 00:19:44.839 } 00:19:44.839 ] 00:19:44.839 }, 00:19:44.839 { 00:19:44.839 "subsystem": "vmd", 00:19:44.839 "config": [] 00:19:44.839 }, 00:19:44.839 { 00:19:44.839 "subsystem": "accel", 00:19:44.839 "config": [ 00:19:44.839 { 00:19:44.839 "method": "accel_set_options", 00:19:44.839 "params": { 00:19:44.839 "small_cache_size": 128, 00:19:44.839 "large_cache_size": 16, 00:19:44.839 "task_count": 2048, 00:19:44.839 "sequence_count": 2048, 00:19:44.839 "buf_count": 2048 00:19:44.839 } 00:19:44.839 } 00:19:44.839 ] 00:19:44.839 }, 00:19:44.839 { 00:19:44.839 "subsystem": "bdev", 
00:19:44.839 "config": [ 00:19:44.839 { 00:19:44.839 "method": "bdev_set_options", 00:19:44.839 "params": { 00:19:44.839 "bdev_io_pool_size": 65535, 00:19:44.839 "bdev_io_cache_size": 256, 00:19:44.839 "bdev_auto_examine": true, 00:19:44.839 "iobuf_small_cache_size": 128, 00:19:44.839 "iobuf_large_cache_size": 16 00:19:44.839 } 00:19:44.839 }, 00:19:44.839 { 00:19:44.839 "method": "bdev_raid_set_options", 00:19:44.839 "params": { 00:19:44.839 "process_window_size_kb": 1024 00:19:44.839 } 00:19:44.839 }, 00:19:44.839 { 00:19:44.839 "method": "bdev_iscsi_set_options", 00:19:44.839 "params": { 00:19:44.839 "timeout_sec": 30 00:19:44.839 } 00:19:44.839 }, 00:19:44.839 { 00:19:44.839 "method": "bdev_nvme_set_options", 00:19:44.839 "params": { 00:19:44.839 "action_on_timeout": "none", 00:19:44.839 "timeout_us": 0, 00:19:44.839 "timeout_admin_us": 0, 00:19:44.839 "keep_alive_timeout_ms": 10000, 00:19:44.839 "arbitration_burst": 0, 00:19:44.839 "low_priority_weight": 0, 00:19:44.839 "medium_priority_weight": 0, 00:19:44.839 "high_priority_weight": 0, 00:19:44.839 "nvme_adminq_poll_period_us": 10000, 00:19:44.839 "nvme_ioq_poll_period_us": 0, 00:19:44.839 "io_queue_requests": 512, 00:19:44.839 "delay_cmd_submit": true, 00:19:44.839 "transport_retry_count": 4, 00:19:44.839 "bdev_retry_count": 3, 00:19:44.839 "transport_ack_timeout": 0, 00:19:44.839 "ctrlr_loss_timeout_sec": 0, 00:19:44.839 "reconnect_delay_sec": 0, 00:19:44.839 "fast_io_fail_timeout_sec": 0, 00:19:44.839 "disable_auto_failback": false, 00:19:44.839 "generate_uuids": false, 00:19:44.839 "transport_tos": 0, 00:19:44.839 "nvme_error_stat": false, 00:19:44.839 "rdma_srq_size": 0, 00:19:44.839 "io_path_stat": false, 00:19:44.839 "allow_accel_sequence": false, 00:19:44.839 "rdma_max_cq_size": 0, 00:19:44.839 "rdma_cm_event_timeout_ms": 0, 00:19:44.839 "dhchap_digests": [ 00:19:44.839 "sha256", 00:19:44.839 "sha384", 00:19:44.839 "sha512" 00:19:44.839 ], 00:19:44.839 "dhchap_dhgroups": [ 00:19:44.839 "null", 00:19:44.839 "ffdhe2048", 00:19:44.839 "ffdhe3072", 00:19:44.839 "ffdhe4096", 00:19:44.839 "ffdhe6144", 00:19:44.839 "ffdhe8192" 00:19:44.839 ] 00:19:44.839 } 00:19:44.839 }, 00:19:44.839 { 00:19:44.839 "method": "bdev_nvme_attach_controller", 00:19:44.839 "params": { 00:19:44.839 "name": "nvme0", 00:19:44.839 "trtype": "TCP", 00:19:44.839 "adrfam": "IPv4", 00:19:44.839 "traddr": "10.0.0.2", 00:19:44.839 "trsvcid": "4420", 00:19:44.839 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:44.839 "prchk_reftag": false, 00:19:44.839 "prchk_guard": false, 00:19:44.839 "ctrlr_loss_timeout_sec": 0, 00:19:44.839 "reconnect_delay_sec": 0, 00:19:44.839 "fast_io_fail_timeout_sec": 0, 00:19:44.839 "psk": "key0", 00:19:44.839 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:44.840 "hdgst": false, 00:19:44.840 "ddgst": false 00:19:44.840 } 00:19:44.840 }, 00:19:44.840 { 00:19:44.840 "method": "bdev_nvme_set_hotplug", 00:19:44.840 "params": { 00:19:44.840 "period_us": 100000, 00:19:44.840 "enable": false 00:19:44.840 } 00:19:44.840 }, 00:19:44.840 { 00:19:44.840 "method": "bdev_enable_histogram", 00:19:44.840 "params": { 00:19:44.840 "name": "nvme0n1", 00:19:44.840 "enable": true 00:19:44.840 } 00:19:44.840 }, 00:19:44.840 { 00:19:44.840 "method": "bdev_wait_for_examine" 00:19:44.840 } 00:19:44.840 ] 00:19:44.840 }, 00:19:44.840 { 00:19:44.840 "subsystem": "nbd", 00:19:44.840 "config": [] 00:19:44.840 } 00:19:44.840 ] 00:19:44.840 }' 00:19:44.840 01:25:10 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 3422280 00:19:44.840 01:25:10 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@948 -- # '[' -z 3422280 ']' 00:19:44.840 01:25:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3422280 00:19:44.840 01:25:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:44.840 01:25:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:44.840 01:25:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3422280 00:19:44.840 01:25:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:44.840 01:25:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:44.840 01:25:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3422280' 00:19:44.840 killing process with pid 3422280 00:19:44.840 01:25:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3422280 00:19:44.840 Received shutdown signal, test time was about 1.000000 seconds 00:19:44.840 00:19:44.840 Latency(us) 00:19:44.840 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:44.840 =================================================================================================================== 00:19:44.840 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:44.840 01:25:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3422280 00:19:45.099 01:25:10 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 3422052 00:19:45.099 01:25:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3422052 ']' 00:19:45.099 01:25:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3422052 00:19:45.099 01:25:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:45.099 01:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:45.099 01:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3422052 00:19:45.099 01:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:45.099 01:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:45.099 01:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3422052' 00:19:45.099 killing process with pid 3422052 00:19:45.099 01:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3422052 00:19:45.099 01:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3422052 00:19:45.359 01:25:11 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:19:45.359 01:25:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:45.359 01:25:11 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:19:45.359 "subsystems": [ 00:19:45.359 { 00:19:45.359 "subsystem": "keyring", 00:19:45.359 "config": [ 00:19:45.359 { 00:19:45.359 "method": "keyring_file_add_key", 00:19:45.359 "params": { 00:19:45.359 "name": "key0", 00:19:45.359 "path": "/tmp/tmp.FewabtIDrF" 00:19:45.359 } 00:19:45.359 } 00:19:45.359 ] 00:19:45.359 }, 00:19:45.359 { 00:19:45.359 "subsystem": "iobuf", 00:19:45.359 "config": [ 00:19:45.359 { 00:19:45.359 "method": "iobuf_set_options", 00:19:45.359 "params": { 00:19:45.359 "small_pool_count": 8192, 00:19:45.359 "large_pool_count": 1024, 00:19:45.359 "small_bufsize": 8192, 00:19:45.359 "large_bufsize": 135168 00:19:45.359 } 00:19:45.359 } 00:19:45.359 ] 00:19:45.359 }, 00:19:45.359 { 00:19:45.359 "subsystem": "sock", 00:19:45.359 "config": [ 00:19:45.359 { 
00:19:45.359 "method": "sock_set_default_impl", 00:19:45.359 "params": { 00:19:45.359 "impl_name": "posix" 00:19:45.359 } 00:19:45.359 }, 00:19:45.359 { 00:19:45.359 "method": "sock_impl_set_options", 00:19:45.359 "params": { 00:19:45.359 "impl_name": "ssl", 00:19:45.359 "recv_buf_size": 4096, 00:19:45.359 "send_buf_size": 4096, 00:19:45.359 "enable_recv_pipe": true, 00:19:45.359 "enable_quickack": false, 00:19:45.359 "enable_placement_id": 0, 00:19:45.359 "enable_zerocopy_send_server": true, 00:19:45.359 "enable_zerocopy_send_client": false, 00:19:45.359 "zerocopy_threshold": 0, 00:19:45.359 "tls_version": 0, 00:19:45.359 "enable_ktls": false 00:19:45.359 } 00:19:45.359 }, 00:19:45.359 { 00:19:45.359 "method": "sock_impl_set_options", 00:19:45.359 "params": { 00:19:45.359 "impl_name": "posix", 00:19:45.359 "recv_buf_size": 2097152, 00:19:45.359 "send_buf_size": 2097152, 00:19:45.359 "enable_recv_pipe": true, 00:19:45.359 "enable_quickack": false, 00:19:45.359 "enable_placement_id": 0, 00:19:45.359 "enable_zerocopy_send_server": true, 00:19:45.359 "enable_zerocopy_send_client": false, 00:19:45.359 "zerocopy_threshold": 0, 00:19:45.359 "tls_version": 0, 00:19:45.359 "enable_ktls": false 00:19:45.359 } 00:19:45.359 } 00:19:45.359 ] 00:19:45.359 }, 00:19:45.359 { 00:19:45.359 "subsystem": "vmd", 00:19:45.359 "config": [] 00:19:45.359 }, 00:19:45.359 { 00:19:45.359 "subsystem": "accel", 00:19:45.359 "config": [ 00:19:45.359 { 00:19:45.359 "method": "accel_set_options", 00:19:45.359 "params": { 00:19:45.359 "small_cache_size": 128, 00:19:45.359 "large_cache_size": 16, 00:19:45.359 "task_count": 2048, 00:19:45.359 "sequence_count": 2048, 00:19:45.359 "buf_count": 2048 00:19:45.359 } 00:19:45.359 } 00:19:45.359 ] 00:19:45.359 }, 00:19:45.359 { 00:19:45.359 "subsystem": "bdev", 00:19:45.359 "config": [ 00:19:45.359 { 00:19:45.359 "method": "bdev_set_options", 00:19:45.359 "params": { 00:19:45.359 "bdev_io_pool_size": 65535, 00:19:45.359 "bdev_io_cache_size": 256, 00:19:45.359 "bdev_auto_examine": true, 00:19:45.359 "iobuf_small_cache_size": 128, 00:19:45.359 "iobuf_large_cache_size": 16 00:19:45.359 } 00:19:45.359 }, 00:19:45.359 { 00:19:45.359 "method": "bdev_raid_set_options", 00:19:45.359 "params": { 00:19:45.359 "process_window_size_kb": 1024 00:19:45.359 } 00:19:45.359 }, 00:19:45.359 { 00:19:45.359 "method": "bdev_iscsi_set_options", 00:19:45.359 "params": { 00:19:45.359 "timeout_sec": 30 00:19:45.359 } 00:19:45.359 }, 00:19:45.359 { 00:19:45.359 "method": "bdev_nvme_set_options", 00:19:45.359 "params": { 00:19:45.359 "action_on_timeout": "none", 00:19:45.359 "timeout_us": 0, 00:19:45.359 "timeout_admin_us": 0, 00:19:45.359 "keep_alive_timeout_ms": 10000, 00:19:45.359 "arbitration_burst": 0, 00:19:45.359 "low_priority_weight": 0, 00:19:45.359 "medium_priority_weight": 0, 00:19:45.359 "high_priority_weight": 0, 00:19:45.359 "nvme_adminq_poll_period_us": 10000, 00:19:45.359 "nvme_ioq_poll_period_us": 0, 00:19:45.359 "io_queue_requests": 0, 00:19:45.359 "delay_cmd_submit": true, 00:19:45.359 "transport_retry_count": 4, 00:19:45.359 "bdev_retry_count": 3, 00:19:45.359 "transport_ack_timeout": 0, 00:19:45.359 "ctrlr_loss_timeout_sec": 0, 00:19:45.359 "reconnect_delay_sec": 0, 00:19:45.359 "fast_io_fail_timeout_sec": 0, 00:19:45.359 "disable_auto_failback": false, 00:19:45.359 "generate_uuids": false, 00:19:45.359 "transport_tos": 0, 00:19:45.359 "nvme_error_stat": false, 00:19:45.359 "rdma_srq_size": 0, 00:19:45.359 "io_path_stat": false, 00:19:45.359 "allow_accel_sequence": false, 00:19:45.359 
"rdma_max_cq_size": 0, 00:19:45.359 "rdma_cm_event_timeout_ms": 0, 00:19:45.359 "dhchap_digests": [ 00:19:45.359 "sha256", 00:19:45.359 "sha384", 00:19:45.359 "sha512" 00:19:45.359 ], 00:19:45.359 "dhchap_dhgroups": [ 00:19:45.359 "null", 00:19:45.359 "ffdhe2048", 00:19:45.359 "ffdhe3072", 00:19:45.359 "ffdhe4096", 00:19:45.359 "ffdhe6144", 00:19:45.359 "ffdhe8192" 00:19:45.359 ] 00:19:45.359 } 00:19:45.359 }, 00:19:45.359 { 00:19:45.359 "method": "bdev_nvme_set_hotplug", 00:19:45.359 "params": { 00:19:45.359 "period_us": 100000, 00:19:45.359 "enable": false 00:19:45.359 } 00:19:45.359 }, 00:19:45.359 { 00:19:45.359 "method": "bdev_malloc_create", 00:19:45.359 "params": { 00:19:45.359 "name": "malloc0", 00:19:45.359 "num_blocks": 8192, 00:19:45.359 "block_size": 4096, 00:19:45.359 "physical_block_size": 4096, 00:19:45.359 "uuid": "456e4403-b02f-4d14-863e-a9b0f99ea9a0", 00:19:45.359 "optimal_io_boundary": 0 00:19:45.359 } 00:19:45.359 }, 00:19:45.359 { 00:19:45.359 "method": "bdev_wait_for_examine" 00:19:45.359 } 00:19:45.359 ] 00:19:45.359 }, 00:19:45.359 { 00:19:45.359 "subsystem": "nbd", 00:19:45.359 "config": [] 00:19:45.359 }, 00:19:45.359 { 00:19:45.359 "subsystem": "scheduler", 00:19:45.359 "config": [ 00:19:45.359 { 00:19:45.359 "method": "framework_set_scheduler", 00:19:45.359 "params": { 00:19:45.359 "name": "static" 00:19:45.359 } 00:19:45.359 } 00:19:45.359 ] 00:19:45.359 }, 00:19:45.359 { 00:19:45.359 "subsystem": "nvmf", 00:19:45.359 "config": [ 00:19:45.359 { 00:19:45.359 "method": "nvmf_set_config", 00:19:45.359 "params": { 00:19:45.359 "discovery_filter": "match_any", 00:19:45.360 "admin_cmd_passthru": { 00:19:45.360 "identify_ctrlr": false 00:19:45.360 } 00:19:45.360 } 00:19:45.360 }, 00:19:45.360 { 00:19:45.360 "method": "nvmf_set_max_subsystems", 00:19:45.360 "params": { 00:19:45.360 "max_subsystems": 1024 00:19:45.360 } 00:19:45.360 }, 00:19:45.360 { 00:19:45.360 "method": "nvmf_set_crdt", 00:19:45.360 "params": { 00:19:45.360 "crdt1": 0, 00:19:45.360 "crdt2": 0, 00:19:45.360 "crdt3": 0 00:19:45.360 } 00:19:45.360 }, 00:19:45.360 { 00:19:45.360 "method": "nvmf_create_transport", 00:19:45.360 "params": { 00:19:45.360 "trtype": "TCP", 00:19:45.360 "max_queue_depth": 128, 00:19:45.360 "max_io_qpairs_per_ctrlr": 127, 00:19:45.360 "in_capsule_data_size": 4096, 00:19:45.360 "max_io_size": 131072, 00:19:45.360 "io_unit_size": 131072, 00:19:45.360 "max_aq_depth": 128, 00:19:45.360 "num_shared_buffers": 511, 00:19:45.360 "buf_cache_size": 4294967295, 00:19:45.360 "dif_insert_or_strip": false, 00:19:45.360 "zcopy": false, 00:19:45.360 "c2h_success": false, 00:19:45.360 "sock_priority": 0, 00:19:45.360 "abort_timeout_sec": 1, 00:19:45.360 "ack_timeout": 0, 00:19:45.360 "data_wr_pool_size": 0 00:19:45.360 } 00:19:45.360 }, 00:19:45.360 { 00:19:45.360 "method": "nvmf_create_subsystem", 00:19:45.360 "params": { 00:19:45.360 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:45.360 "allow_any_host": false, 00:19:45.360 "serial_number": "00000000000000000000", 00:19:45.360 "model_number": "SPDK bdev Controller", 00:19:45.360 "max_namespaces": 32, 00:19:45.360 "min_cntlid": 1, 00:19:45.360 "max_cntlid": 65519, 00:19:45.360 "ana_reporting": false 00:19:45.360 } 00:19:45.360 }, 00:19:45.360 { 00:19:45.360 "method": "nvmf_subsystem_add_host", 00:19:45.360 "params": { 00:19:45.360 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:45.360 "host": "nqn.2016-06.io.spdk:host1", 00:19:45.360 "psk": "key0" 00:19:45.360 } 00:19:45.360 }, 00:19:45.360 { 00:19:45.360 "method": "nvmf_subsystem_add_ns", 00:19:45.360 
"params": { 00:19:45.360 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:45.360 "namespace": { 00:19:45.360 "nsid": 1, 00:19:45.360 "bdev_name": "malloc0", 00:19:45.360 "nguid": "456E4403B02F4D14863EA9B0F99EA9A0", 00:19:45.360 "uuid": "456e4403-b02f-4d14-863e-a9b0f99ea9a0", 00:19:45.360 "no_auto_visible": false 00:19:45.360 } 00:19:45.360 } 00:19:45.360 }, 00:19:45.360 { 00:19:45.360 "method": "nvmf_subsystem_add_listener", 00:19:45.360 "params": { 00:19:45.360 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:45.360 "listen_address": { 00:19:45.360 "trtype": "TCP", 00:19:45.360 "adrfam": "IPv4", 00:19:45.360 "traddr": "10.0.0.2", 00:19:45.360 "trsvcid": "4420" 00:19:45.360 }, 00:19:45.360 "secure_channel": false, 00:19:45.360 "sock_impl": "ssl" 00:19:45.360 } 00:19:45.360 } 00:19:45.360 ] 00:19:45.360 } 00:19:45.360 ] 00:19:45.360 }' 00:19:45.360 01:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:45.360 01:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.360 01:25:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3422761 00:19:45.360 01:25:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:45.360 01:25:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3422761 00:19:45.360 01:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3422761 ']' 00:19:45.360 01:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.360 01:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:45.360 01:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:45.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:45.360 01:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:45.360 01:25:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.360 [2024-07-16 01:25:11.292323] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:19:45.360 [2024-07-16 01:25:11.292382] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:45.360 EAL: No free 2048 kB hugepages reported on node 1 00:19:45.619 [2024-07-16 01:25:11.352247] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.619 [2024-07-16 01:25:11.423189] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:45.619 [2024-07-16 01:25:11.423228] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:45.619 [2024-07-16 01:25:11.423235] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:45.619 [2024-07-16 01:25:11.423241] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:45.619 [2024-07-16 01:25:11.423246] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:45.619 [2024-07-16 01:25:11.423296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:45.877 [2024-07-16 01:25:11.635838] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:45.877 [2024-07-16 01:25:11.667873] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:45.877 [2024-07-16 01:25:11.678650] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:46.135 01:25:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:46.135 01:25:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:46.135 01:25:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:46.135 01:25:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:46.135 01:25:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.394 01:25:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:46.394 01:25:12 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=3423007 00:19:46.394 01:25:12 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 3423007 /var/tmp/bdevperf.sock 00:19:46.394 01:25:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3423007 ']' 00:19:46.394 01:25:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:46.394 01:25:12 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:46.394 01:25:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:46.394 01:25:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:46.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
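bdevperf takes the same shortcut next: it is launched with -c /dev/fd/63, consuming the bperfcfg JSON echoed below (keyring entry, bdev_nvme_attach_controller and histogram setup included), so the TLS-attached bdev exists before any RPC is issued. A sketch with bperf.json as an assumed scratch file, plus the controller check the trace performs afterwards:

    # -z makes bdevperf idle until perform_tests; -c pre-loads the keyring and attach from JSON.
    $SPDK_DIR/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c bperf.json &
    # Verify the config-driven attach produced the expected controller name.
    $SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
    # -> nvme0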
00:19:46.394 01:25:12 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:19:46.394 "subsystems": [ 00:19:46.394 { 00:19:46.394 "subsystem": "keyring", 00:19:46.394 "config": [ 00:19:46.394 { 00:19:46.394 "method": "keyring_file_add_key", 00:19:46.394 "params": { 00:19:46.394 "name": "key0", 00:19:46.394 "path": "/tmp/tmp.FewabtIDrF" 00:19:46.394 } 00:19:46.394 } 00:19:46.394 ] 00:19:46.394 }, 00:19:46.394 { 00:19:46.394 "subsystem": "iobuf", 00:19:46.394 "config": [ 00:19:46.394 { 00:19:46.394 "method": "iobuf_set_options", 00:19:46.394 "params": { 00:19:46.394 "small_pool_count": 8192, 00:19:46.394 "large_pool_count": 1024, 00:19:46.394 "small_bufsize": 8192, 00:19:46.394 "large_bufsize": 135168 00:19:46.394 } 00:19:46.394 } 00:19:46.394 ] 00:19:46.394 }, 00:19:46.394 { 00:19:46.394 "subsystem": "sock", 00:19:46.394 "config": [ 00:19:46.394 { 00:19:46.394 "method": "sock_set_default_impl", 00:19:46.394 "params": { 00:19:46.394 "impl_name": "posix" 00:19:46.394 } 00:19:46.394 }, 00:19:46.394 { 00:19:46.394 "method": "sock_impl_set_options", 00:19:46.394 "params": { 00:19:46.394 "impl_name": "ssl", 00:19:46.394 "recv_buf_size": 4096, 00:19:46.394 "send_buf_size": 4096, 00:19:46.394 "enable_recv_pipe": true, 00:19:46.395 "enable_quickack": false, 00:19:46.395 "enable_placement_id": 0, 00:19:46.395 "enable_zerocopy_send_server": true, 00:19:46.395 "enable_zerocopy_send_client": false, 00:19:46.395 "zerocopy_threshold": 0, 00:19:46.395 "tls_version": 0, 00:19:46.395 "enable_ktls": false 00:19:46.395 } 00:19:46.395 }, 00:19:46.395 { 00:19:46.395 "method": "sock_impl_set_options", 00:19:46.395 "params": { 00:19:46.395 "impl_name": "posix", 00:19:46.395 "recv_buf_size": 2097152, 00:19:46.395 "send_buf_size": 2097152, 00:19:46.395 "enable_recv_pipe": true, 00:19:46.395 "enable_quickack": false, 00:19:46.395 "enable_placement_id": 0, 00:19:46.395 "enable_zerocopy_send_server": true, 00:19:46.395 "enable_zerocopy_send_client": false, 00:19:46.395 "zerocopy_threshold": 0, 00:19:46.395 "tls_version": 0, 00:19:46.395 "enable_ktls": false 00:19:46.395 } 00:19:46.395 } 00:19:46.395 ] 00:19:46.395 }, 00:19:46.395 { 00:19:46.395 "subsystem": "vmd", 00:19:46.395 "config": [] 00:19:46.395 }, 00:19:46.395 { 00:19:46.395 "subsystem": "accel", 00:19:46.395 "config": [ 00:19:46.395 { 00:19:46.395 "method": "accel_set_options", 00:19:46.395 "params": { 00:19:46.395 "small_cache_size": 128, 00:19:46.395 "large_cache_size": 16, 00:19:46.395 "task_count": 2048, 00:19:46.395 "sequence_count": 2048, 00:19:46.395 "buf_count": 2048 00:19:46.395 } 00:19:46.395 } 00:19:46.395 ] 00:19:46.395 }, 00:19:46.395 { 00:19:46.395 "subsystem": "bdev", 00:19:46.395 "config": [ 00:19:46.395 { 00:19:46.395 "method": "bdev_set_options", 00:19:46.395 "params": { 00:19:46.395 "bdev_io_pool_size": 65535, 00:19:46.395 "bdev_io_cache_size": 256, 00:19:46.395 "bdev_auto_examine": true, 00:19:46.395 "iobuf_small_cache_size": 128, 00:19:46.395 "iobuf_large_cache_size": 16 00:19:46.395 } 00:19:46.395 }, 00:19:46.395 { 00:19:46.395 "method": "bdev_raid_set_options", 00:19:46.395 "params": { 00:19:46.395 "process_window_size_kb": 1024 00:19:46.395 } 00:19:46.395 }, 00:19:46.395 { 00:19:46.395 "method": "bdev_iscsi_set_options", 00:19:46.395 "params": { 00:19:46.395 "timeout_sec": 30 00:19:46.395 } 00:19:46.395 }, 00:19:46.395 { 00:19:46.395 "method": "bdev_nvme_set_options", 00:19:46.395 "params": { 00:19:46.395 "action_on_timeout": "none", 00:19:46.395 "timeout_us": 0, 00:19:46.395 "timeout_admin_us": 0, 00:19:46.395 "keep_alive_timeout_ms": 
10000, 00:19:46.395 "arbitration_burst": 0, 00:19:46.395 "low_priority_weight": 0, 00:19:46.395 "medium_priority_weight": 0, 00:19:46.395 "high_priority_weight": 0, 00:19:46.395 "nvme_adminq_poll_period_us": 10000, 00:19:46.395 "nvme_ioq_poll_period_us": 0, 00:19:46.395 "io_queue_requests": 512, 00:19:46.395 "delay_cmd_submit": true, 00:19:46.395 "transport_retry_count": 4, 00:19:46.395 "bdev_retry_count": 3, 00:19:46.395 "transport_ack_timeout": 0, 00:19:46.395 "ctrlr_loss_timeout_sec": 0, 00:19:46.395 "reconnect_delay_sec": 0, 00:19:46.395 "fast_io_fail_timeout_sec": 0, 00:19:46.395 "disable_auto_failback": false, 00:19:46.395 "generate_uuids": false, 00:19:46.395 "transport_tos": 0, 00:19:46.395 "nvme_error_stat": false, 00:19:46.395 "rdma_srq_size": 0, 00:19:46.395 "io_path_stat": false, 00:19:46.395 "allow_accel_sequence": false, 00:19:46.395 "rdma_max_cq_size": 0, 00:19:46.395 "rdma_cm_event_timeout_ms": 0, 00:19:46.395 "dhchap_digests": [ 00:19:46.395 "sha256", 00:19:46.395 "sha384", 00:19:46.395 "sha512" 00:19:46.395 ], 00:19:46.395 "dhchap_dhgroups": [ 00:19:46.395 "null", 00:19:46.395 "ffdhe2048", 00:19:46.395 "ffdhe3072", 00:19:46.395 "ffdhe4096", 00:19:46.395 "ffdhe6144", 00:19:46.395 "ffdhe8192" 00:19:46.395 ] 00:19:46.395 } 00:19:46.395 }, 00:19:46.395 { 00:19:46.395 "method": "bdev_nvme_attach_controller", 00:19:46.395 "params": { 00:19:46.395 "name": "nvme0", 00:19:46.395 "trtype": "TCP", 00:19:46.395 "adrfam": "IPv4", 00:19:46.395 "traddr": "10.0.0.2", 00:19:46.395 "trsvcid": "4420", 00:19:46.395 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:46.395 "prchk_reftag": false, 00:19:46.395 "prchk_guard": false, 00:19:46.395 "ctrlr_loss_timeout_sec": 0, 00:19:46.395 "reconnect_delay_sec": 0, 00:19:46.395 "fast_io_fail_timeout_sec": 0, 00:19:46.395 "psk": "key0", 00:19:46.395 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:46.395 "hdgst": false, 00:19:46.395 "ddgst": false 00:19:46.395 } 00:19:46.395 }, 00:19:46.395 { 00:19:46.395 "method": "bdev_nvme_set_hotplug", 00:19:46.395 "params": { 00:19:46.395 "period_us": 100000, 00:19:46.395 "enable": false 00:19:46.395 } 00:19:46.395 }, 00:19:46.395 { 00:19:46.395 "method": "bdev_enable_histogram", 00:19:46.395 "params": { 00:19:46.395 "name": "nvme0n1", 00:19:46.395 "enable": true 00:19:46.395 } 00:19:46.395 }, 00:19:46.395 { 00:19:46.395 "method": "bdev_wait_for_examine" 00:19:46.395 } 00:19:46.395 ] 00:19:46.395 }, 00:19:46.395 { 00:19:46.395 "subsystem": "nbd", 00:19:46.395 "config": [] 00:19:46.395 } 00:19:46.395 ] 00:19:46.395 }' 00:19:46.395 01:25:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:46.395 01:25:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.395 [2024-07-16 01:25:12.168730] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
00:19:46.395 [2024-07-16 01:25:12.168777] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3423007 ] 00:19:46.395 EAL: No free 2048 kB hugepages reported on node 1 00:19:46.395 [2024-07-16 01:25:12.223691] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.395 [2024-07-16 01:25:12.295537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:46.654 [2024-07-16 01:25:12.444929] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:47.222 01:25:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:47.222 01:25:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:47.222 01:25:12 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:47.222 01:25:12 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:19:47.222 01:25:13 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.222 01:25:13 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:47.480 Running I/O for 1 seconds... 00:19:48.416 00:19:48.416 Latency(us) 00:19:48.416 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:48.416 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:48.416 Verification LBA range: start 0x0 length 0x2000 00:19:48.416 nvme0n1 : 1.02 5737.25 22.41 0.00 0.00 22130.11 5804.62 29709.65 00:19:48.416 =================================================================================================================== 00:19:48.416 Total : 5737.25 22.41 0.00 0.00 22130.11 5804.62 29709.65 00:19:48.416 0 00:19:48.416 01:25:14 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:19:48.416 01:25:14 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:19:48.416 01:25:14 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:48.416 01:25:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:19:48.416 01:25:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:19:48.416 01:25:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:19:48.416 01:25:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:48.416 01:25:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:19:48.416 01:25:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:19:48.416 01:25:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:19:48.416 01:25:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:48.416 nvmf_trace.0 00:19:48.416 01:25:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:19:48.416 01:25:14 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 3423007 00:19:48.416 01:25:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3423007 ']' 00:19:48.416 01:25:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- 
# kill -0 3423007 00:19:48.416 01:25:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:48.416 01:25:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:48.416 01:25:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3423007 00:19:48.416 01:25:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:48.416 01:25:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:48.416 01:25:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3423007' 00:19:48.416 killing process with pid 3423007 00:19:48.416 01:25:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3423007 00:19:48.416 Received shutdown signal, test time was about 1.000000 seconds 00:19:48.416 00:19:48.416 Latency(us) 00:19:48.416 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:48.416 =================================================================================================================== 00:19:48.416 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:48.416 01:25:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3423007 00:19:48.675 01:25:14 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:48.675 01:25:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:48.675 01:25:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:19:48.675 01:25:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:48.675 01:25:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:19:48.675 01:25:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:48.675 01:25:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:48.675 rmmod nvme_tcp 00:19:48.675 rmmod nvme_fabrics 00:19:48.675 rmmod nvme_keyring 00:19:48.675 01:25:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:48.675 01:25:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:19:48.675 01:25:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:19:48.675 01:25:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 3422761 ']' 00:19:48.675 01:25:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 3422761 00:19:48.675 01:25:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3422761 ']' 00:19:48.675 01:25:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3422761 00:19:48.675 01:25:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:48.675 01:25:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:48.934 01:25:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3422761 00:19:48.934 01:25:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:48.934 01:25:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:48.934 01:25:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3422761' 00:19:48.934 killing process with pid 3422761 00:19:48.934 01:25:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3422761 00:19:48.934 01:25:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3422761 00:19:48.934 01:25:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:48.934 01:25:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:48.934 01:25:14 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:48.934 01:25:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:48.934 01:25:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:48.934 01:25:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:48.934 01:25:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:48.934 01:25:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.467 01:25:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:51.467 01:25:16 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.geN8uW3FNw /tmp/tmp.KJrG5x8T7c /tmp/tmp.FewabtIDrF 00:19:51.467 00:19:51.467 real 1m24.460s 00:19:51.467 user 2m10.106s 00:19:51.467 sys 0m29.234s 00:19:51.467 01:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:51.467 01:25:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.467 ************************************ 00:19:51.467 END TEST nvmf_tls 00:19:51.467 ************************************ 00:19:51.467 01:25:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:51.467 01:25:16 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:51.467 01:25:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:51.467 01:25:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:51.467 01:25:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:51.467 ************************************ 00:19:51.467 START TEST nvmf_fips 00:19:51.467 ************************************ 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:51.467 * Looking for test storage... 
00:19:51.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.467 01:25:17 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:51.467 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:19:51.468 Error setting digest 00:19:51.468 00928FBF837F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:19:51.468 00928FBF837F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:19:51.468 01:25:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:58.029 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:58.029 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:19:58.029 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:58.029 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:58.029 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:58.030 
01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:58.030 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:58.030 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:58.030 Found net devices under 0000:86:00.0: cvl_0_0 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:58.030 Found net devices under 0000:86:00.1: cvl_0_1 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:58.030 01:25:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:58.030 01:25:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:58.030 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:58.030 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:19:58.030 00:19:58.030 --- 10.0.0.2 ping statistics --- 00:19:58.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.030 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:19:58.030 01:25:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:58.030 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:58.030 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:19:58.030 00:19:58.030 --- 10.0.0.1 ping statistics --- 00:19:58.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.030 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:19:58.030 01:25:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:58.030 01:25:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:19:58.030 01:25:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:58.030 01:25:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:58.030 01:25:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:58.030 01:25:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:58.030 01:25:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:58.030 01:25:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:58.030 01:25:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:58.030 01:25:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:19:58.030 01:25:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:58.030 01:25:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:58.030 01:25:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:58.030 01:25:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=3427014 00:19:58.030 01:25:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 3427014 00:19:58.030 01:25:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:58.030 01:25:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 3427014 ']' 00:19:58.030 01:25:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.030 01:25:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:58.030 01:25:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.030 01:25:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:58.030 01:25:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:58.030 [2024-07-16 01:25:23.116870] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:19:58.030 [2024-07-16 01:25:23.116915] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:58.030 EAL: No free 2048 kB hugepages reported on node 1 00:19:58.030 [2024-07-16 01:25:23.176133] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.030 [2024-07-16 01:25:23.252849] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:58.030 [2024-07-16 01:25:23.252889] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:58.030 [2024-07-16 01:25:23.252895] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:58.030 [2024-07-16 01:25:23.252901] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:58.030 [2024-07-16 01:25:23.252906] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:58.030 [2024-07-16 01:25:23.252931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.030 01:25:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:58.031 01:25:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:19:58.031 01:25:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:58.031 01:25:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:58.031 01:25:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:58.031 01:25:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:58.031 01:25:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:19:58.031 01:25:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:58.031 01:25:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:58.031 01:25:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:58.031 01:25:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:58.031 01:25:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:58.031 01:25:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:58.031 01:25:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:58.290 [2024-07-16 01:25:24.083927] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:58.290 [2024-07-16 01:25:24.099932] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:58.290 [2024-07-16 01:25:24.100133] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:58.290 [2024-07-16 01:25:24.128025] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:58.290 malloc0 00:19:58.290 01:25:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:58.290 01:25:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=3427211 00:19:58.290 01:25:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 3427211 /var/tmp/bdevperf.sock 00:19:58.290 01:25:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:58.290 01:25:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 3427211 ']' 00:19:58.290 01:25:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:58.290 01:25:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:19:58.290 01:25:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:58.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:58.290 01:25:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:58.290 01:25:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:58.290 [2024-07-16 01:25:24.206991] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:19:58.290 [2024-07-16 01:25:24.207042] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3427211 ] 00:19:58.290 EAL: No free 2048 kB hugepages reported on node 1 00:19:58.290 [2024-07-16 01:25:24.257969] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.550 [2024-07-16 01:25:24.330727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:59.117 01:25:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:59.117 01:25:24 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:19:59.117 01:25:24 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:59.376 [2024-07-16 01:25:25.128493] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:59.376 [2024-07-16 01:25:25.128570] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:59.376 TLSTESTn1 00:19:59.376 01:25:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:59.376 Running I/O for 10 seconds... 
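Before the I/O phase above, the trace created an interchange-format TLS PSK and wired it into both ends: nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace and listens on 10.0.0.2:4420, and bdevperf attaches with the same key. Condensed into a runnable bash sketch: the key file and the bdevperf attach command are taken verbatim from the trace, while the target-side rpc.py calls are a plausible reconstruction (setup_nvmf_tgt_conf is not expanded in this log), so the subsystem layout and the --secure-channel flag in particular should be read as assumptions.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt

# PSK file creation, verbatim from the trace above
echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
chmod 0600 "$key_path"

# Target side: sketch of a plausible sequence (exact calls not shown in this trace)
$rpc nvmf_create_transport -t tcp -o          # -t tcp -o, as set in NVMF_TRANSPORT_OPTS above
$rpc bdev_malloc_create -b malloc0 32 4096    # backs the malloc0 bdev seen in the log
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 --secure-channel
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key_path"

# Initiator side, against bdevperf's RPC socket (verbatim from the trace above)
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk "$key_path"

The deprecation warnings in the trace (nvmf_tcp_psk_path on the target, spdk_nvme_ctrlr_opts.psk on the initiator) confirm that both sides were given the PSK as a file path, a mechanism scheduled for removal in v24.09.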
00:20:09.354
00:20:09.354 Latency(us)
00:20:09.354 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:09.354 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:09.354 Verification LBA range: start 0x0 length 0x2000
00:20:09.354 TLSTESTn1 : 10.01 5692.63 22.24 0.00 0.00 22452.24 5430.13 29335.16
00:20:09.354 ===================================================================================================================
00:20:09.354 Total : 5692.63 22.24 0.00 0.00 22452.24 5430.13 29335.16
00:20:09.354 0
00:20:09.613 01:25:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:20:09.613 01:25:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:20:09.613 01:25:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id
00:20:09.613 01:25:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0
00:20:09.613 01:25:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']'
00:20:09.613 01:25:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:20:09.613 01:25:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0
00:20:09.613 01:25:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]]
00:20:09.613 01:25:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files
00:20:09.613 01:25:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:20:09.613 nvmf_trace.0
00:20:09.613 01:25:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0
00:20:09.613 01:25:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3427211
00:20:09.613 01:25:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 3427211 ']'
00:20:09.613 01:25:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 3427211
00:20:09.613 01:25:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname
00:20:09.613 01:25:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:20:09.613 01:25:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3427211
00:20:09.613 01:25:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:20:09.613 01:25:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:20:09.613 01:25:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3427211'
00:20:09.613 killing process with pid 3427211 01:25:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 3427211
00:20:09.613 Received shutdown signal, test time was about 10.000000 seconds
00:20:09.613
00:20:09.613 Latency(us)
00:20:09.613 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:09.613 ===================================================================================================================
00:20:09.613 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:09.613 [2024-07-16 01:25:35.477197] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:20:09.613 01:25:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 3427211
00:20:09.873 01:25:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 01:25:35 nvmf_tcp.nvmf_fips --
nvmf/common.sh@488 -- # nvmfcleanup 00:20:09.873 01:25:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:20:09.873 01:25:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:09.873 01:25:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:20:09.873 01:25:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:09.873 01:25:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:09.873 rmmod nvme_tcp 00:20:09.873 rmmod nvme_fabrics 00:20:09.873 rmmod nvme_keyring 00:20:09.873 01:25:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:09.873 01:25:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:20:09.873 01:25:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:20:09.873 01:25:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 3427014 ']' 00:20:09.873 01:25:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 3427014 00:20:09.873 01:25:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 3427014 ']' 00:20:09.873 01:25:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 3427014 00:20:09.873 01:25:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:20:09.873 01:25:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:09.873 01:25:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3427014 00:20:09.873 01:25:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:09.873 01:25:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:09.873 01:25:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3427014' 00:20:09.873 killing process with pid 3427014 00:20:09.873 01:25:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 3427014 00:20:09.873 [2024-07-16 01:25:35.762071] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:09.873 01:25:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 3427014 00:20:10.133 01:25:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:10.133 01:25:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:10.133 01:25:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:10.133 01:25:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:10.133 01:25:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:10.133 01:25:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.133 01:25:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:10.133 01:25:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.039 01:25:38 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:12.039 01:25:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:12.039 00:20:12.039 real 0m20.979s 00:20:12.039 user 0m22.500s 00:20:12.039 sys 0m9.200s 00:20:12.039 01:25:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:12.039 01:25:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:12.039 ************************************ 00:20:12.039 END TEST nvmf_fips 
00:20:12.039 ************************************ 00:20:12.298 01:25:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:12.298 01:25:38 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:20:12.298 01:25:38 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:20:12.298 01:25:38 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:20:12.298 01:25:38 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:20:12.298 01:25:38 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:20:12.298 01:25:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:17.669 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:17.669 01:25:43 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:17.669 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:17.669 Found net devices under 0000:86:00.0: cvl_0_0 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:17.669 Found net devices under 0000:86:00.1: cvl_0_1 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:20:17.669 01:25:43 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:17.669 01:25:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:17.669 01:25:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
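The interface discovery that nvmf.sh just completed (ending with TCP_INTERFACE_LIST holding the two cvl_0_* devices) reduces to globbing sysfs for each matched PCI function. The loop below is simplified from the nvmf/common.sh trace above; the operstate and empty-glob guards visible in the trace are elided.

# For every matched PCI address, list its network interfaces via sysfs.
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../0000:86:00.0/net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done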
00:20:17.669 01:25:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:17.669 ************************************ 00:20:17.669 START TEST nvmf_perf_adq 00:20:17.669 ************************************ 00:20:17.669 01:25:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:17.669 * Looking for test storage... 00:20:17.669 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:17.669 01:25:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:17.669 01:25:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:17.669 01:25:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:17.669 01:25:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:17.669 01:25:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:17.669 01:25:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:17.669 01:25:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:17.669 01:25:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:17.669 01:25:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:17.669 01:25:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:17.669 01:25:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:17.669 01:25:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:17.669 01:25:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:17.669 01:25:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:17.669 01:25:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:17.669 01:25:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:17.669 01:25:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:17.669 01:25:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:17.669 01:25:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:17.669 01:25:43 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:17.669 01:25:43 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:17.669 01:25:43 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:17.669 01:25:43 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.669 01:25:43 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.670 01:25:43 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.670 01:25:43 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:17.670 01:25:43 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.670 01:25:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:20:17.670 01:25:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:17.670 01:25:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:17.670 01:25:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:17.670 01:25:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:17.670 01:25:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:17.670 01:25:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:17.670 01:25:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:17.670 01:25:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:17.670 01:25:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:17.670 01:25:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:17.670 01:25:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.937 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:22.937 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:22.937 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:22.937 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:22.937 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:22.937 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:22.937 01:25:48 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:20:22.937 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:22.937 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:22.937 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:22.937 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:22.937 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:22.937 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:22.937 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:22.937 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:22.937 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:22.937 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:22.937 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:22.937 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:22.938 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:22.938 Found 0000:86:00.1 (0x8086 - 0x159b) 
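The "Found 0000:86:00.x" lines come from bucketing PCI functions by vendor:device ID against a pci_bus_cache map that nvmf/common.sh populated earlier (that scan is outside this excerpt, so treat pci_bus_cache as given). In outline, matching the @301-@330 trace:

intel=0x8086 mellanox=0x15b3
e810+=(${pci_bus_cache["$intel:0x1592"]})    # E810 device IDs
e810+=(${pci_bus_cache["$intel:0x159b"]})    # 0x159b is what this job's NICs report
x722+=(${pci_bus_cache["$intel:0x37d2"]})
mlx+=(${pci_bus_cache["$mellanox:0x1017"]})  # one of several ConnectX IDs probed
pci_devs=("${e810[@]}")                      # SPDK_TEST_NVMF_NICS=e810 selects this bucket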
00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:22.938 Found net devices under 0000:86:00.0: cvl_0_0 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:22.938 Found net devices under 0000:86:00.1: cvl_0_1 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:20:22.938 01:25:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:23.506 01:25:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:20:26.040 01:25:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:20:31.313 01:25:56 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:31.313 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:31.313 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:31.313 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:31.313 Found net devices under 0000:86:00.0: cvl_0_0 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:31.314 Found net devices under 0000:86:00.1: cvl_0_1 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:31.314 01:25:56 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:31.314 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:31.314 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:20:31.314 00:20:31.314 --- 10.0.0.2 ping statistics --- 00:20:31.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:31.314 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:31.314 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:31.314 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:20:31.314 00:20:31.314 --- 10.0.0.1 ping statistics --- 00:20:31.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:31.314 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3436949 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3436949 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 3436949 ']' 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:31.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:31.314 01:25:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:31.314 [2024-07-16 01:25:56.902424] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
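
Annotation: nvmf_tcp_init lets one machine act as both target and initiator by pushing one E810 port into a private network namespace and pinging in both directions to verify the path. A condensed sketch of the plumbing traced above, reusing the interface names and addresses from this run:

# Hedged replay of the namespace setup (names/IPs taken from this log).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                     # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # namespace -> initiator

Prepending NVMF_TARGET_NS_CMD to the NVMF_APP array, as the trace shows, is what makes every later nvmf_tgt launch run inside that namespace, so the 10.0.0.2 listener is only reachable over the E810 ports.
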
00:20:31.314 [2024-07-16 01:25:56.902466] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:31.314 EAL: No free 2048 kB hugepages reported on node 1 00:20:31.314 [2024-07-16 01:25:56.956678] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:31.314 [2024-07-16 01:25:57.038611] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:31.314 [2024-07-16 01:25:57.038651] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:31.314 [2024-07-16 01:25:57.038658] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:31.314 [2024-07-16 01:25:57.038664] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:31.314 [2024-07-16 01:25:57.038669] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:31.314 [2024-07-16 01:25:57.038742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:31.314 [2024-07-16 01:25:57.038760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:31.314 [2024-07-16 01:25:57.038865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:31.314 [2024-07-16 01:25:57.038866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.882 01:25:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:31.882 01:25:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:20:31.882 01:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:31.882 01:25:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:31.882 01:25:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:31.882 01:25:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:31.882 01:25:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:20:31.882 01:25:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:31.882 01:25:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:31.882 01:25:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.882 01:25:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:31.882 01:25:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.882 01:25:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:31.882 01:25:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:31.882 01:25:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.882 01:25:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:31.882 01:25:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.882 01:25:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:31.882 01:25:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.882 01:25:57 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:20:32.140 01:25:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.141 01:25:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:32.141 01:25:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.141 01:25:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:32.141 [2024-07-16 01:25:57.904686] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:32.141 01:25:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.141 01:25:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:32.141 01:25:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.141 01:25:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:32.141 Malloc1 00:20:32.141 01:25:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.141 01:25:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:32.141 01:25:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.141 01:25:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:32.141 01:25:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.141 01:25:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:32.141 01:25:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.141 01:25:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:32.141 01:25:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.141 01:25:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:32.141 01:25:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.141 01:25:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:32.141 [2024-07-16 01:25:57.956422] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:32.141 01:25:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.141 01:25:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=3437083 00:20:32.141 01:25:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:20:32.141 01:25:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:32.141 EAL: No free 2048 kB hugepages reported on node 1 00:20:34.046 01:25:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:20:34.046 01:25:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.046 01:25:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:34.046 01:25:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.046 01:25:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{
00:20:34.046 "tick_rate": 2100000000,
00:20:34.046 "poll_groups": [
00:20:34.046 {
00:20:34.046 "name": "nvmf_tgt_poll_group_000",
00:20:34.046 "admin_qpairs": 1,
00:20:34.046 "io_qpairs": 1,
00:20:34.046 "current_admin_qpairs": 1,
00:20:34.046 "current_io_qpairs": 1,
00:20:34.046 "pending_bdev_io": 0,
00:20:34.046 "completed_nvme_io": 20897,
00:20:34.046 "transports": [
00:20:34.046 {
00:20:34.046 "trtype": "TCP"
00:20:34.046 }
00:20:34.046 ]
00:20:34.046 },
00:20:34.046 {
00:20:34.046 "name": "nvmf_tgt_poll_group_001",
00:20:34.046 "admin_qpairs": 0,
00:20:34.046 "io_qpairs": 1,
00:20:34.046 "current_admin_qpairs": 0,
00:20:34.046 "current_io_qpairs": 1,
00:20:34.046 "pending_bdev_io": 0,
00:20:34.046 "completed_nvme_io": 20809,
00:20:34.046 "transports": [
00:20:34.046 {
00:20:34.046 "trtype": "TCP"
00:20:34.046 }
00:20:34.046 ]
00:20:34.046 },
00:20:34.046 {
00:20:34.046 "name": "nvmf_tgt_poll_group_002",
00:20:34.046 "admin_qpairs": 0,
00:20:34.046 "io_qpairs": 1,
00:20:34.046 "current_admin_qpairs": 0,
00:20:34.046 "current_io_qpairs": 1,
00:20:34.046 "pending_bdev_io": 0,
00:20:34.046 "completed_nvme_io": 20694,
00:20:34.046 "transports": [
00:20:34.046 {
00:20:34.046 "trtype": "TCP"
00:20:34.046 }
00:20:34.046 ]
00:20:34.046 },
00:20:34.046 {
00:20:34.046 "name": "nvmf_tgt_poll_group_003",
00:20:34.046 "admin_qpairs": 0,
00:20:34.046 "io_qpairs": 1,
00:20:34.046 "current_admin_qpairs": 0,
00:20:34.046 "current_io_qpairs": 1,
00:20:34.046 "pending_bdev_io": 0,
00:20:34.046 "completed_nvme_io": 20674,
00:20:34.046 "transports": [
00:20:34.046 {
00:20:34.046 "trtype": "TCP"
00:20:34.046 }
00:20:34.046 ]
00:20:34.046 }
00:20:34.046 ]
00:20:34.046 }'
00:20:34.046 01:25:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:34.046 01:25:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:20:34.046 01:26:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:20:34.046 01:26:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:20:34.046 01:26:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 3437083
00:20:42.161 Initializing NVMe Controllers
00:20:42.161 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:42.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:20:42.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:20:42.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:20:42.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:20:42.161 Initialization complete. Launching workers.
00:20:42.161 ========================================================
00:20:42.161 Latency(us)
00:20:42.161 Device Information : IOPS MiB/s Average min max
00:20:42.161 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10953.80 42.79 5827.12 1937.58 58106.59
00:20:42.161 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11126.60 43.46 5733.17 2433.50 58943.63
00:20:42.161 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11001.50 42.97 5802.08 2256.40 61573.40
00:20:42.161 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11025.50 43.07 5788.51 1925.56 60036.98
00:20:42.161 ========================================================
00:20:42.161 Total : 44107.40 172.29 5787.52 1925.56 61573.40
00:20:42.161
00:20:42.161 01:26:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:20:42.161 01:26:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:42.161 01:26:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:20:42.161 01:26:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:42.161 01:26:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:20:42.161 01:26:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:42.161 01:26:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:42.161 rmmod nvme_tcp 00:20:42.161 rmmod nvme_fabrics 00:20:42.161 rmmod nvme_keyring 00:20:42.161 01:26:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:42.161 01:26:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:20:42.161 01:26:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:20:42.161 01:26:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3436949 ']' 00:20:42.161 01:26:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3436949 00:20:42.161 01:26:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3436949 ']' 00:20:42.161 01:26:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3436949 00:20:42.161 01:26:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:20:42.161 01:26:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:42.161 01:26:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3436949 00:20:42.420 01:26:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:42.420 01:26:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:42.420 01:26:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3436949' 00:20:42.420 killing process with pid 3436949 00:20:42.420 01:26:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3436949 00:20:42.420 01:26:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3436949 00:20:42.420 01:26:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:42.420 01:26:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:42.420 01:26:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:42.420 01:26:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:42.420 01:26:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:42.421 01:26:08 nvmf_tcp.nvmf_perf_adq
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.421 01:26:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:42.421 01:26:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:44.954 01:26:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:44.954 01:26:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:20:44.954 01:26:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:45.892 01:26:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:20:47.793 01:26:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:20:53.055 01:26:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:53.056 
01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:53.056 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:53.056 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
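
Annotation: once a PCI function passes the ID check, its kernel interface name is read straight out of sysfs; the ${var##*/} expansion strips the directory prefix, leaving just the netdev name. A small sketch of that step, assuming one netdev per function as in this run:

pci=0000:86:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface name
echo "Found net devices under $pci: ${pci_net_devs[*]}"
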
00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:53.056 Found net devices under 0000:86:00.0: cvl_0_0 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:53.056 Found net devices under 0000:86:00.1: cvl_0_1 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:53.056 
01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:53.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:53.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:20:53.056 00:20:53.056 --- 10.0.0.2 ping statistics --- 00:20:53.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.056 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:53.056 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:53.056 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:20:53.056 00:20:53.056 --- 10.0.0.1 ping statistics --- 00:20:53.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.056 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:20:53.056 01:26:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:53.056 01:26:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:53.056 01:26:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:53.057 net.core.busy_poll = 1 00:20:53.057 01:26:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:53.057 net.core.busy_read = 1 00:20:53.057 01:26:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:53.057 01:26:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:53.315 01:26:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:20:53.315 01:26:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:53.315 01:26:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:53.315 01:26:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:53.315 01:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:53.315 01:26:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:53.315 01:26:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:53.315 01:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3440916 00:20:53.315 01:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3440916 00:20:53.315 01:26:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:53.315 01:26:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 3440916 ']' 00:20:53.315 01:26:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.315 01:26:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:53.315 01:26:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.315 01:26:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:53.315 01:26:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:53.315 [2024-07-16 01:26:19.281830] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:20:53.315 [2024-07-16 01:26:19.281877] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:53.574 EAL: No free 2048 kB hugepages reported on node 1 00:20:53.574 [2024-07-16 01:26:19.342678] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:53.574 [2024-07-16 01:26:19.414420] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:53.574 [2024-07-16 01:26:19.414461] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:53.574 [2024-07-16 01:26:19.414469] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:53.574 [2024-07-16 01:26:19.414476] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:53.574 [2024-07-16 01:26:19.414498] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
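
Annotation: adq_configure_driver is what separates this second pass from the first run: hardware TC offload and busy polling go on, then an mqprio root qdisc plus a flower filter steer NVMe/TCP traffic (dst_port 4420) into traffic class 1 in hardware, and set_xps_rxqs pins the transmit queues. Condensed from the trace above; the ethtool/tc commands run inside the target namespace via ip netns exec, while the busy-poll sysctls are global:

ethtool --offload cvl_0_0 hw-tc-offload on
ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
tc qdisc add dev cvl_0_0 ingress
tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

On the target side the matching knobs are the RPC calls traced below, sock_impl_set_options --enable-placement-id 1 and nvmf_create_transport ... --sock-priority 1, which lines up with the second nvmf_get_stats dump further down, where all four I/O qpairs land on nvmf_tgt_poll_group_000.
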
00:20:53.574 [2024-07-16 01:26:19.414547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:53.574 [2024-07-16 01:26:19.414662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:53.574 [2024-07-16 01:26:19.414731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:53.574 [2024-07-16 01:26:19.414731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.146 01:26:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:54.146 01:26:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:20:54.146 01:26:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:54.146 01:26:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:54.146 01:26:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:54.146 01:26:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:54.146 01:26:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:20:54.146 01:26:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:54.146 01:26:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:54.146 01:26:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.146 01:26:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:54.450 01:26:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.450 01:26:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:54.450 01:26:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:54.450 01:26:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.450 01:26:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:54.450 01:26:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.450 01:26:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:54.450 01:26:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.450 01:26:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:54.450 01:26:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.450 01:26:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:54.450 01:26:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.450 01:26:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:54.450 [2024-07-16 01:26:20.252903] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:54.450 01:26:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.450 01:26:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:54.450 01:26:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.450 01:26:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:54.450 Malloc1 00:20:54.450 01:26:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.450 01:26:20 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:54.450 01:26:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.450 01:26:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:54.450 01:26:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.450 01:26:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:54.450 01:26:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.450 01:26:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:54.450 01:26:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.450 01:26:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:54.450 01:26:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.450 01:26:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:54.450 [2024-07-16 01:26:20.300357] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:54.450 01:26:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.450 01:26:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=3441018 00:20:54.450 01:26:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:20:54.450 01:26:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:54.450 EAL: No free 2048 kB hugepages reported on node 1 00:20:56.373 01:26:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:20:56.373 01:26:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.373 01:26:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:56.373 01:26:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.373 01:26:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{
00:20:56.374 "tick_rate": 2100000000,
00:20:56.374 "poll_groups": [
00:20:56.374 {
00:20:56.374 "name": "nvmf_tgt_poll_group_000",
00:20:56.374 "admin_qpairs": 1,
00:20:56.374 "io_qpairs": 4,
00:20:56.374 "current_admin_qpairs": 1,
00:20:56.374 "current_io_qpairs": 4,
00:20:56.374 "pending_bdev_io": 0,
00:20:56.374 "completed_nvme_io": 43137,
00:20:56.374 "transports": [
00:20:56.374 {
00:20:56.374 "trtype": "TCP"
00:20:56.374 }
00:20:56.374 ]
00:20:56.374 },
00:20:56.374 {
00:20:56.374 "name": "nvmf_tgt_poll_group_001",
00:20:56.374 "admin_qpairs": 0,
00:20:56.374 "io_qpairs": 0,
00:20:56.374 "current_admin_qpairs": 0,
00:20:56.374 "current_io_qpairs": 0,
00:20:56.374 "pending_bdev_io": 0,
00:20:56.374 "completed_nvme_io": 0,
00:20:56.374 "transports": [
00:20:56.374 {
00:20:56.374 "trtype": "TCP"
00:20:56.374 }
00:20:56.374 ]
00:20:56.374 },
00:20:56.374 {
00:20:56.374 "name": "nvmf_tgt_poll_group_002",
00:20:56.374 "admin_qpairs": 0,
00:20:56.374 "io_qpairs": 0,
00:20:56.374 "current_admin_qpairs": 0,
00:20:56.374 "current_io_qpairs": 0,
00:20:56.374 "pending_bdev_io": 0,
00:20:56.374 "completed_nvme_io": 0,
00:20:56.374 "transports": [
00:20:56.374 {
00:20:56.374 "trtype": "TCP"
00:20:56.374 }
00:20:56.374 ]
00:20:56.374 },
00:20:56.374 {
00:20:56.374 "name": "nvmf_tgt_poll_group_003",
00:20:56.374 "admin_qpairs": 0,
00:20:56.374 "io_qpairs": 0,
00:20:56.374 "current_admin_qpairs": 0,
00:20:56.374 "current_io_qpairs": 0,
00:20:56.374 "pending_bdev_io": 0,
00:20:56.374 "completed_nvme_io": 0,
00:20:56.374 "transports": [
00:20:56.374 {
00:20:56.374 "trtype": "TCP"
00:20:56.374 }
00:20:56.374 ]
00:20:56.374 }
00:20:56.374 ]
00:20:56.374 }'
00:20:56.374 01:26:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:56.374 01:26:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:20:56.632 01:26:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=3 00:20:56.632 01:26:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 3 -lt 2 ]] 00:20:56.632 01:26:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 3441018
00:21:04.746 Initializing NVMe Controllers
00:21:04.746 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:04.746 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:21:04.746 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:21:04.746 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:21:04.746 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:21:04.746 Initialization complete. Launching workers.
00:21:04.746 ========================================================
00:21:04.746 Latency(us)
00:21:04.746 Device Information : IOPS MiB/s Average min max
00:21:04.746 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6443.10 25.17 9926.43 1352.45 54211.92
00:21:04.746 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5998.60 23.43 10634.12 1281.61 70411.60
00:21:04.746 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5669.30 22.15 11285.55 1219.03 73798.45
00:21:04.746 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5343.80 20.87 11970.78 1351.14 70604.64
00:21:04.746 ========================================================
00:21:04.746 Total : 23454.80 91.62 10901.71 1219.03 73798.45
00:21:04.746
00:21:04.746 01:26:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:21:04.746 01:26:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:04.746 01:26:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:21:04.746 01:26:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:04.746 01:26:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:21:04.746 01:26:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:04.746 01:26:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:04.746 rmmod nvme_tcp 00:21:04.746 rmmod nvme_fabrics 00:21:04.746 rmmod nvme_keyring 00:21:04.746 01:26:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:04.746 01:26:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:21:04.746 01:26:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:21:04.746 01:26:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3440916 ']' 00:21:04.746 01:26:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess
3440916 00:21:04.746 01:26:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3440916 ']' 00:21:04.746 01:26:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3440916 00:21:04.746 01:26:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:21:04.746 01:26:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:04.746 01:26:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3440916 00:21:04.746 01:26:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:04.746 01:26:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:04.746 01:26:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3440916' 00:21:04.746 killing process with pid 3440916 00:21:04.746 01:26:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3440916 00:21:04.746 01:26:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3440916 00:21:05.005 01:26:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:05.005 01:26:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:05.005 01:26:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:05.005 01:26:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:05.005 01:26:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:05.005 01:26:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.005 01:26:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:05.005 01:26:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.543 01:26:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:07.543 01:26:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:21:07.543 00:21:07.543 real 0m49.606s 00:21:07.543 user 2m49.679s 00:21:07.543 sys 0m8.935s 00:21:07.543 01:26:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:07.543 01:26:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:07.543 ************************************ 00:21:07.543 END TEST nvmf_perf_adq 00:21:07.543 ************************************ 00:21:07.543 01:26:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:07.543 01:26:32 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:07.543 01:26:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:07.543 01:26:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:07.543 01:26:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:07.543 ************************************ 00:21:07.543 START TEST nvmf_shutdown 00:21:07.543 ************************************ 00:21:07.543 01:26:32 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:07.543 * Looking for test storage... 
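
Annotation: killprocess, traced at both nvmftestfini calls above, is the harness's teardown helper: probe the pid, sanity-check the process name, then signal and reap it. A minimal sketch of that pattern; the pid is reused from this run purely for illustration, and the real helper handles more cases (e.g. sudo wrappers):

pid=3440916                                        # pid from this run, illustration only
if kill -0 "$pid" 2>/dev/null; then                # probe: is the process still alive?
    process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 for an SPDK app
    if [ "$process_name" != sudo ]; then           # never signal a sudo wrapper directly
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    wait "$pid" 2>/dev/null || true                # reap it if it was our own child
fi
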
00:21:07.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:07.543 ************************************ 00:21:07.543 START TEST nvmf_shutdown_tc1 00:21:07.543 ************************************ 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:21:07.543 01:26:33 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:07.543 01:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:07.544 01:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.544 01:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:07.544 01:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.544 01:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:07.544 01:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:07.544 01:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:07.544 01:26:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:12.818 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:12.818 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:12.818 01:26:37 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.818 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:12.818 Found net devices under 0000:86:00.0: cvl_0_0 00:21:12.819 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.819 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:12.819 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.819 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:12.819 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:12.819 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:12.819 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:12.819 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.819 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:12.819 Found net devices under 0000:86:00.1: cvl_0_1 00:21:12.819 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.819 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:12.819 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:12.819 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:12.819 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:12.819 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:12.819 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:12.819 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:12.819 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:12.819 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:21:12.819 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:21:12.819 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:21:12.819 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:21:12.819 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:21:12.819 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:21:12.819 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:21:12.819 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:21:12.819 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:21:12.819 01:26:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:21:12.819 01:26:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:21:12.819 01:26:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:21:12.819 01:26:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:21:12.819 01:26:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:21:12.819 01:26:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:21:12.819 01:26:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:21:12.819 01:26:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:21:12.819 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:12.819 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms
00:21:12.819
00:21:12.819 --- 10.0.0.2 ping statistics ---
00:21:12.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:12.819 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms
00:21:12.819 01:26:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:12.819 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:12.819 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms
00:21:12.819
00:21:12.819 --- 10.0.0.1 ping statistics ---
00:21:12.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:12.819 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms
00:21:12.819 01:26:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:12.819 01:26:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0
00:21:12.819 01:26:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:21:12.819 01:26:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:12.819 01:26:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:21:12.819 01:26:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:21:12.819 01:26:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:12.819 01:26:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:21:12.819 01:26:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:21:12.819 01:26:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:21:12.819 01:26:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:21:12.819 01:26:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable
00:21:12.819 01:26:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:21:12.819 01:26:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=3446230
00:21:12.819 01:26:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 3446230
00:21:12.819 01:26:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 3446230 ']'
00:21:12.819 01:26:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:12.819 01:26:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100
00:21:12.819 01:26:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:21:12.819 01:26:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:12.819 01:26:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable
00:21:12.819 01:26:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:21:12.819 [2024-07-16 01:26:38.238090] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization...
00:21:12.819 [2024-07-16 01:26:38.238135] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:12.819 EAL: No free 2048 kB hugepages reported on node 1 00:21:12.819 [2024-07-16 01:26:38.296906] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:12.819 [2024-07-16 01:26:38.376044] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:12.819 [2024-07-16 01:26:38.376079] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:12.819 [2024-07-16 01:26:38.376085] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:12.819 [2024-07-16 01:26:38.376091] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:12.819 [2024-07-16 01:26:38.376095] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:12.819 [2024-07-16 01:26:38.376212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:12.819 [2024-07-16 01:26:38.376308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:12.819 [2024-07-16 01:26:38.376416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:12.819 [2024-07-16 01:26:38.376417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:13.078 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:13.078 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:21:13.078 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:13.078 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:13.078 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:13.337 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:13.337 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:13.337 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.337 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:13.337 [2024-07-16 01:26:39.077103] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:13.337 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.337 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:13.337 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:13.337 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:13.337 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:13.337 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:13.337 01:26:39 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:13.337 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:13.337 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:13.337 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:13.337 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:13.337 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:13.337 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:13.337 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:13.337 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:13.337 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:13.337 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:13.337 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:13.337 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:13.337 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:13.337 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:13.337 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:13.337 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:13.337 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:13.337 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:13.337 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:13.337 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:13.337 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.337 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:13.337 Malloc1 00:21:13.337 [2024-07-16 01:26:39.172527] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:13.337 Malloc2 00:21:13.337 Malloc3 00:21:13.337 Malloc4 00:21:13.337 Malloc5 00:21:13.595 Malloc6 00:21:13.595 Malloc7 00:21:13.595 Malloc8 00:21:13.595 Malloc9 00:21:13.595 Malloc10 00:21:13.595 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.595 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:13.595 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:13.595 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:13.853 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=3446509 00:21:13.854 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 3446509 
/var/tmp/bdevperf.sock 00:21:13.854 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 3446509 ']' 00:21:13.854 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:13.854 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:13.854 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:13.854 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:13.854 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:13.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:13.854 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:13.854 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:13.854 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:13.854 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:13.854 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:13.854 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:13.854 { 00:21:13.854 "params": { 00:21:13.854 "name": "Nvme$subsystem", 00:21:13.854 "trtype": "$TEST_TRANSPORT", 00:21:13.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:13.854 "adrfam": "ipv4", 00:21:13.854 "trsvcid": "$NVMF_PORT", 00:21:13.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:13.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:13.854 "hdgst": ${hdgst:-false}, 00:21:13.854 "ddgst": ${ddgst:-false} 00:21:13.854 }, 00:21:13.854 "method": "bdev_nvme_attach_controller" 00:21:13.854 } 00:21:13.854 EOF 00:21:13.854 )") 00:21:13.854 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:13.854 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:13.854 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:13.854 { 00:21:13.854 "params": { 00:21:13.854 "name": "Nvme$subsystem", 00:21:13.854 "trtype": "$TEST_TRANSPORT", 00:21:13.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:13.854 "adrfam": "ipv4", 00:21:13.854 "trsvcid": "$NVMF_PORT", 00:21:13.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:13.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:13.854 "hdgst": ${hdgst:-false}, 00:21:13.854 "ddgst": ${ddgst:-false} 00:21:13.854 }, 00:21:13.854 "method": "bdev_nvme_attach_controller" 00:21:13.854 } 00:21:13.854 EOF 00:21:13.854 )") 00:21:13.854 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:13.854 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:13.854 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:13.854 { 00:21:13.854 "params": { 00:21:13.854 
"name": "Nvme$subsystem", 00:21:13.854 "trtype": "$TEST_TRANSPORT", 00:21:13.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:13.854 "adrfam": "ipv4", 00:21:13.854 "trsvcid": "$NVMF_PORT", 00:21:13.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:13.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:13.854 "hdgst": ${hdgst:-false}, 00:21:13.854 "ddgst": ${ddgst:-false} 00:21:13.854 }, 00:21:13.854 "method": "bdev_nvme_attach_controller" 00:21:13.854 } 00:21:13.854 EOF 00:21:13.854 )") 00:21:13.854 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:13.854 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:13.854 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:13.854 { 00:21:13.854 "params": { 00:21:13.854 "name": "Nvme$subsystem", 00:21:13.854 "trtype": "$TEST_TRANSPORT", 00:21:13.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:13.854 "adrfam": "ipv4", 00:21:13.854 "trsvcid": "$NVMF_PORT", 00:21:13.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:13.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:13.854 "hdgst": ${hdgst:-false}, 00:21:13.854 "ddgst": ${ddgst:-false} 00:21:13.854 }, 00:21:13.854 "method": "bdev_nvme_attach_controller" 00:21:13.854 } 00:21:13.854 EOF 00:21:13.854 )") 00:21:13.854 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:13.854 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:13.854 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:13.854 { 00:21:13.854 "params": { 00:21:13.854 "name": "Nvme$subsystem", 00:21:13.854 "trtype": "$TEST_TRANSPORT", 00:21:13.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:13.854 "adrfam": "ipv4", 00:21:13.854 "trsvcid": "$NVMF_PORT", 00:21:13.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:13.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:13.854 "hdgst": ${hdgst:-false}, 00:21:13.854 "ddgst": ${ddgst:-false} 00:21:13.854 }, 00:21:13.854 "method": "bdev_nvme_attach_controller" 00:21:13.854 } 00:21:13.854 EOF 00:21:13.854 )") 00:21:13.854 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:13.854 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:13.854 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:13.854 { 00:21:13.854 "params": { 00:21:13.854 "name": "Nvme$subsystem", 00:21:13.854 "trtype": "$TEST_TRANSPORT", 00:21:13.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:13.854 "adrfam": "ipv4", 00:21:13.854 "trsvcid": "$NVMF_PORT", 00:21:13.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:13.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:13.854 "hdgst": ${hdgst:-false}, 00:21:13.854 "ddgst": ${ddgst:-false} 00:21:13.854 }, 00:21:13.854 "method": "bdev_nvme_attach_controller" 00:21:13.854 } 00:21:13.854 EOF 00:21:13.854 )") 00:21:13.854 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:13.854 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:13.854 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:13.854 { 00:21:13.854 "params": { 00:21:13.854 "name": "Nvme$subsystem", 
00:21:13.854 "trtype": "$TEST_TRANSPORT", 00:21:13.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:13.854 "adrfam": "ipv4", 00:21:13.854 "trsvcid": "$NVMF_PORT", 00:21:13.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:13.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:13.854 "hdgst": ${hdgst:-false}, 00:21:13.854 "ddgst": ${ddgst:-false} 00:21:13.854 }, 00:21:13.854 "method": "bdev_nvme_attach_controller" 00:21:13.854 } 00:21:13.854 EOF 00:21:13.854 )") 00:21:13.854 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:13.854 [2024-07-16 01:26:39.644174] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:21:13.854 [2024-07-16 01:26:39.644223] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:13.854 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:13.854 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:13.854 { 00:21:13.854 "params": { 00:21:13.854 "name": "Nvme$subsystem", 00:21:13.854 "trtype": "$TEST_TRANSPORT", 00:21:13.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:13.854 "adrfam": "ipv4", 00:21:13.854 "trsvcid": "$NVMF_PORT", 00:21:13.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:13.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:13.854 "hdgst": ${hdgst:-false}, 00:21:13.854 "ddgst": ${ddgst:-false} 00:21:13.854 }, 00:21:13.854 "method": "bdev_nvme_attach_controller" 00:21:13.854 } 00:21:13.854 EOF 00:21:13.854 )") 00:21:13.854 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:13.854 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:13.854 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:13.854 { 00:21:13.854 "params": { 00:21:13.854 "name": "Nvme$subsystem", 00:21:13.854 "trtype": "$TEST_TRANSPORT", 00:21:13.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:13.854 "adrfam": "ipv4", 00:21:13.854 "trsvcid": "$NVMF_PORT", 00:21:13.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:13.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:13.854 "hdgst": ${hdgst:-false}, 00:21:13.854 "ddgst": ${ddgst:-false} 00:21:13.854 }, 00:21:13.854 "method": "bdev_nvme_attach_controller" 00:21:13.854 } 00:21:13.854 EOF 00:21:13.854 )") 00:21:13.854 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:13.854 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:13.854 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:13.854 { 00:21:13.854 "params": { 00:21:13.854 "name": "Nvme$subsystem", 00:21:13.855 "trtype": "$TEST_TRANSPORT", 00:21:13.855 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:13.855 "adrfam": "ipv4", 00:21:13.855 "trsvcid": "$NVMF_PORT", 00:21:13.855 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:13.855 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:13.855 "hdgst": ${hdgst:-false}, 00:21:13.855 "ddgst": ${ddgst:-false} 00:21:13.855 }, 00:21:13.855 "method": "bdev_nvme_attach_controller" 00:21:13.855 } 00:21:13.855 EOF 00:21:13.855 )") 00:21:13.855 01:26:39 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:13.855 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:21:13.855 EAL: No free 2048 kB hugepages reported on node 1 00:21:13.855 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:13.855 01:26:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:13.855 "params": { 00:21:13.855 "name": "Nvme1", 00:21:13.855 "trtype": "tcp", 00:21:13.855 "traddr": "10.0.0.2", 00:21:13.855 "adrfam": "ipv4", 00:21:13.855 "trsvcid": "4420", 00:21:13.855 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.855 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:13.855 "hdgst": false, 00:21:13.855 "ddgst": false 00:21:13.855 }, 00:21:13.855 "method": "bdev_nvme_attach_controller" 00:21:13.855 },{ 00:21:13.855 "params": { 00:21:13.855 "name": "Nvme2", 00:21:13.855 "trtype": "tcp", 00:21:13.855 "traddr": "10.0.0.2", 00:21:13.855 "adrfam": "ipv4", 00:21:13.855 "trsvcid": "4420", 00:21:13.855 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:13.855 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:13.855 "hdgst": false, 00:21:13.855 "ddgst": false 00:21:13.855 }, 00:21:13.855 "method": "bdev_nvme_attach_controller" 00:21:13.855 },{ 00:21:13.855 "params": { 00:21:13.855 "name": "Nvme3", 00:21:13.855 "trtype": "tcp", 00:21:13.855 "traddr": "10.0.0.2", 00:21:13.855 "adrfam": "ipv4", 00:21:13.855 "trsvcid": "4420", 00:21:13.855 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:13.855 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:13.855 "hdgst": false, 00:21:13.855 "ddgst": false 00:21:13.855 }, 00:21:13.855 "method": "bdev_nvme_attach_controller" 00:21:13.855 },{ 00:21:13.855 "params": { 00:21:13.855 "name": "Nvme4", 00:21:13.855 "trtype": "tcp", 00:21:13.855 "traddr": "10.0.0.2", 00:21:13.855 "adrfam": "ipv4", 00:21:13.855 "trsvcid": "4420", 00:21:13.855 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:13.855 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:13.855 "hdgst": false, 00:21:13.855 "ddgst": false 00:21:13.855 }, 00:21:13.855 "method": "bdev_nvme_attach_controller" 00:21:13.855 },{ 00:21:13.855 "params": { 00:21:13.855 "name": "Nvme5", 00:21:13.855 "trtype": "tcp", 00:21:13.855 "traddr": "10.0.0.2", 00:21:13.855 "adrfam": "ipv4", 00:21:13.855 "trsvcid": "4420", 00:21:13.855 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:13.855 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:13.855 "hdgst": false, 00:21:13.855 "ddgst": false 00:21:13.855 }, 00:21:13.855 "method": "bdev_nvme_attach_controller" 00:21:13.855 },{ 00:21:13.855 "params": { 00:21:13.855 "name": "Nvme6", 00:21:13.855 "trtype": "tcp", 00:21:13.855 "traddr": "10.0.0.2", 00:21:13.855 "adrfam": "ipv4", 00:21:13.855 "trsvcid": "4420", 00:21:13.855 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:13.855 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:13.855 "hdgst": false, 00:21:13.855 "ddgst": false 00:21:13.855 }, 00:21:13.855 "method": "bdev_nvme_attach_controller" 00:21:13.855 },{ 00:21:13.855 "params": { 00:21:13.855 "name": "Nvme7", 00:21:13.855 "trtype": "tcp", 00:21:13.855 "traddr": "10.0.0.2", 00:21:13.855 "adrfam": "ipv4", 00:21:13.855 "trsvcid": "4420", 00:21:13.855 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:13.855 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:13.855 "hdgst": false, 00:21:13.855 "ddgst": false 00:21:13.855 }, 00:21:13.855 "method": "bdev_nvme_attach_controller" 00:21:13.855 },{ 00:21:13.855 "params": { 00:21:13.855 "name": "Nvme8", 00:21:13.855 "trtype": "tcp", 00:21:13.855 
"traddr": "10.0.0.2", 00:21:13.855 "adrfam": "ipv4", 00:21:13.855 "trsvcid": "4420", 00:21:13.855 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:13.855 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:13.855 "hdgst": false, 00:21:13.855 "ddgst": false 00:21:13.855 }, 00:21:13.855 "method": "bdev_nvme_attach_controller" 00:21:13.855 },{ 00:21:13.855 "params": { 00:21:13.855 "name": "Nvme9", 00:21:13.855 "trtype": "tcp", 00:21:13.855 "traddr": "10.0.0.2", 00:21:13.855 "adrfam": "ipv4", 00:21:13.855 "trsvcid": "4420", 00:21:13.855 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:13.855 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:13.855 "hdgst": false, 00:21:13.855 "ddgst": false 00:21:13.855 }, 00:21:13.855 "method": "bdev_nvme_attach_controller" 00:21:13.855 },{ 00:21:13.855 "params": { 00:21:13.855 "name": "Nvme10", 00:21:13.855 "trtype": "tcp", 00:21:13.855 "traddr": "10.0.0.2", 00:21:13.855 "adrfam": "ipv4", 00:21:13.855 "trsvcid": "4420", 00:21:13.855 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:13.855 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:13.855 "hdgst": false, 00:21:13.855 "ddgst": false 00:21:13.855 }, 00:21:13.855 "method": "bdev_nvme_attach_controller" 00:21:13.855 }' 00:21:13.855 [2024-07-16 01:26:39.702582] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.855 [2024-07-16 01:26:39.775202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:15.228 01:26:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:15.228 01:26:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:21:15.228 01:26:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:15.228 01:26:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.228 01:26:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:15.228 01:26:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.228 01:26:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 3446509 00:21:15.228 01:26:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:21:15.228 01:26:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:21:16.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3446509 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:16.163 01:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 3446230 00:21:16.163 01:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:16.163 01:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:16.163 01:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:16.163 01:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:16.163 01:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:16.163 01:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:16.163 { 00:21:16.163 "params": { 00:21:16.163 "name": "Nvme$subsystem", 00:21:16.163 "trtype": "$TEST_TRANSPORT", 00:21:16.163 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:16.163 "adrfam": "ipv4", 00:21:16.163 "trsvcid": "$NVMF_PORT", 00:21:16.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:16.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:16.163 "hdgst": ${hdgst:-false}, 00:21:16.163 "ddgst": ${ddgst:-false} 00:21:16.163 }, 00:21:16.163 "method": "bdev_nvme_attach_controller" 00:21:16.163 } 00:21:16.163 EOF 00:21:16.163 )") 00:21:16.163 01:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:16.163 01:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:16.163 01:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:16.163 { 00:21:16.163 "params": { 00:21:16.163 "name": "Nvme$subsystem", 00:21:16.163 "trtype": "$TEST_TRANSPORT", 00:21:16.163 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:16.163 "adrfam": "ipv4", 00:21:16.163 "trsvcid": "$NVMF_PORT", 00:21:16.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:16.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:16.163 "hdgst": ${hdgst:-false}, 00:21:16.163 "ddgst": ${ddgst:-false} 00:21:16.163 }, 00:21:16.163 "method": "bdev_nvme_attach_controller" 00:21:16.163 } 00:21:16.163 EOF 00:21:16.163 )") 00:21:16.163 01:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:16.163 01:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:16.163 01:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:16.163 { 00:21:16.163 "params": { 00:21:16.163 "name": "Nvme$subsystem", 00:21:16.163 "trtype": "$TEST_TRANSPORT", 00:21:16.163 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:16.163 "adrfam": "ipv4", 00:21:16.163 "trsvcid": "$NVMF_PORT", 00:21:16.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:16.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:16.163 "hdgst": ${hdgst:-false}, 00:21:16.163 "ddgst": ${ddgst:-false} 00:21:16.163 }, 00:21:16.163 "method": "bdev_nvme_attach_controller" 00:21:16.163 } 00:21:16.163 EOF 00:21:16.163 )") 00:21:16.422 01:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:16.422 01:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:16.422 01:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:16.422 { 00:21:16.422 "params": { 00:21:16.422 "name": "Nvme$subsystem", 00:21:16.422 "trtype": "$TEST_TRANSPORT", 00:21:16.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:16.422 "adrfam": "ipv4", 00:21:16.422 "trsvcid": "$NVMF_PORT", 00:21:16.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:16.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:16.422 "hdgst": ${hdgst:-false}, 00:21:16.422 "ddgst": ${ddgst:-false} 00:21:16.422 }, 00:21:16.422 "method": "bdev_nvme_attach_controller" 00:21:16.422 } 00:21:16.422 EOF 00:21:16.422 )") 00:21:16.422 01:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:16.422 01:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:16.422 01:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # 
config+=("$(cat <<-EOF 00:21:16.422 { 00:21:16.422 "params": { 00:21:16.422 "name": "Nvme$subsystem", 00:21:16.422 "trtype": "$TEST_TRANSPORT", 00:21:16.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:16.422 "adrfam": "ipv4", 00:21:16.422 "trsvcid": "$NVMF_PORT", 00:21:16.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:16.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:16.422 "hdgst": ${hdgst:-false}, 00:21:16.422 "ddgst": ${ddgst:-false} 00:21:16.422 }, 00:21:16.422 "method": "bdev_nvme_attach_controller" 00:21:16.422 } 00:21:16.422 EOF 00:21:16.422 )") 00:21:16.422 01:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:16.422 01:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:16.422 01:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:16.422 { 00:21:16.422 "params": { 00:21:16.422 "name": "Nvme$subsystem", 00:21:16.422 "trtype": "$TEST_TRANSPORT", 00:21:16.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:16.422 "adrfam": "ipv4", 00:21:16.422 "trsvcid": "$NVMF_PORT", 00:21:16.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:16.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:16.422 "hdgst": ${hdgst:-false}, 00:21:16.422 "ddgst": ${ddgst:-false} 00:21:16.422 }, 00:21:16.422 "method": "bdev_nvme_attach_controller" 00:21:16.422 } 00:21:16.422 EOF 00:21:16.422 )") 00:21:16.422 01:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:16.422 01:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:16.422 01:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:16.422 { 00:21:16.422 "params": { 00:21:16.422 "name": "Nvme$subsystem", 00:21:16.422 "trtype": "$TEST_TRANSPORT", 00:21:16.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:16.422 "adrfam": "ipv4", 00:21:16.422 "trsvcid": "$NVMF_PORT", 00:21:16.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:16.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:16.422 "hdgst": ${hdgst:-false}, 00:21:16.422 "ddgst": ${ddgst:-false} 00:21:16.422 }, 00:21:16.422 "method": "bdev_nvme_attach_controller" 00:21:16.422 } 00:21:16.422 EOF 00:21:16.422 )") 00:21:16.422 01:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:16.422 [2024-07-16 01:26:42.178475] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
00:21:16.422 [2024-07-16 01:26:42.178522] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3446995 ] 00:21:16.422 01:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:16.422 01:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:16.422 { 00:21:16.422 "params": { 00:21:16.422 "name": "Nvme$subsystem", 00:21:16.422 "trtype": "$TEST_TRANSPORT", 00:21:16.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:16.422 "adrfam": "ipv4", 00:21:16.422 "trsvcid": "$NVMF_PORT", 00:21:16.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:16.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:16.422 "hdgst": ${hdgst:-false}, 00:21:16.422 "ddgst": ${ddgst:-false} 00:21:16.422 }, 00:21:16.422 "method": "bdev_nvme_attach_controller" 00:21:16.422 } 00:21:16.422 EOF 00:21:16.422 )") 00:21:16.422 01:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:16.422 01:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:16.422 01:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:16.422 { 00:21:16.422 "params": { 00:21:16.422 "name": "Nvme$subsystem", 00:21:16.422 "trtype": "$TEST_TRANSPORT", 00:21:16.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:16.422 "adrfam": "ipv4", 00:21:16.422 "trsvcid": "$NVMF_PORT", 00:21:16.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:16.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:16.422 "hdgst": ${hdgst:-false}, 00:21:16.422 "ddgst": ${ddgst:-false} 00:21:16.422 }, 00:21:16.422 "method": "bdev_nvme_attach_controller" 00:21:16.422 } 00:21:16.422 EOF 00:21:16.422 )") 00:21:16.422 01:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:16.422 01:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:16.422 01:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:16.422 { 00:21:16.422 "params": { 00:21:16.422 "name": "Nvme$subsystem", 00:21:16.422 "trtype": "$TEST_TRANSPORT", 00:21:16.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:16.422 "adrfam": "ipv4", 00:21:16.422 "trsvcid": "$NVMF_PORT", 00:21:16.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:16.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:16.423 "hdgst": ${hdgst:-false}, 00:21:16.423 "ddgst": ${ddgst:-false} 00:21:16.423 }, 00:21:16.423 "method": "bdev_nvme_attach_controller" 00:21:16.423 } 00:21:16.423 EOF 00:21:16.423 )") 00:21:16.423 01:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:16.423 01:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
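
The config being assembled here follows a simple pattern: gen_nvmf_target_json emits one bdev_nvme_attach_controller stanza per subsystem id from a heredoc template, comma-joins the stanzas, and runs the result through `jq .` for validation and pretty-printing before bdevperf consumes it via --json. A condensed sketch of that pattern, with two subsystems for brevity; the top-level "subsystems"/"bdev" wrapper expected by bdevperf's JSON loader is not visible in the xtrace, so it is reconstructed here, and the function name is a hypothetical stand-in:

  #!/usr/bin/env bash
  # Hypothetical stand-in for gen_nvmf_target_json; the values mirror the
  # expanded config printed in the trace below.
  gen_target_json() {
          local subsystem config=()
          for subsystem in "${@:-1}"; do
                  config+=("$(cat <<EOF
  {
    "params": {
      "name": "Nvme$subsystem",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }
  EOF
  )")
          done
          # Comma-join the stanzas (the first character of IFS supplies the
          # separator for ${config[*]}) and validate with jq, which exits
          # non-zero on malformed JSON.
          local IFS=,
          jq . <<JSON
  { "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
  JSON
  }
  # Matching the bdevperf invocation traced below:
  #   bdevperf --json <(gen_target_json 1 2) -q 64 -o 65536 -w verify -t 1
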
00:21:16.423 EAL: No free 2048 kB hugepages reported on node 1 00:21:16.423 01:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:16.423 01:26:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:16.423 "params": { 00:21:16.423 "name": "Nvme1", 00:21:16.423 "trtype": "tcp", 00:21:16.423 "traddr": "10.0.0.2", 00:21:16.423 "adrfam": "ipv4", 00:21:16.423 "trsvcid": "4420", 00:21:16.423 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:16.423 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:16.423 "hdgst": false, 00:21:16.423 "ddgst": false 00:21:16.423 }, 00:21:16.423 "method": "bdev_nvme_attach_controller" 00:21:16.423 },{ 00:21:16.423 "params": { 00:21:16.423 "name": "Nvme2", 00:21:16.423 "trtype": "tcp", 00:21:16.423 "traddr": "10.0.0.2", 00:21:16.423 "adrfam": "ipv4", 00:21:16.423 "trsvcid": "4420", 00:21:16.423 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:16.423 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:16.423 "hdgst": false, 00:21:16.423 "ddgst": false 00:21:16.423 }, 00:21:16.423 "method": "bdev_nvme_attach_controller" 00:21:16.423 },{ 00:21:16.423 "params": { 00:21:16.423 "name": "Nvme3", 00:21:16.423 "trtype": "tcp", 00:21:16.423 "traddr": "10.0.0.2", 00:21:16.423 "adrfam": "ipv4", 00:21:16.423 "trsvcid": "4420", 00:21:16.423 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:16.423 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:16.423 "hdgst": false, 00:21:16.423 "ddgst": false 00:21:16.423 }, 00:21:16.423 "method": "bdev_nvme_attach_controller" 00:21:16.423 },{ 00:21:16.423 "params": { 00:21:16.423 "name": "Nvme4", 00:21:16.423 "trtype": "tcp", 00:21:16.423 "traddr": "10.0.0.2", 00:21:16.423 "adrfam": "ipv4", 00:21:16.423 "trsvcid": "4420", 00:21:16.423 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:16.423 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:16.423 "hdgst": false, 00:21:16.423 "ddgst": false 00:21:16.423 }, 00:21:16.423 "method": "bdev_nvme_attach_controller" 00:21:16.423 },{ 00:21:16.423 "params": { 00:21:16.423 "name": "Nvme5", 00:21:16.423 "trtype": "tcp", 00:21:16.423 "traddr": "10.0.0.2", 00:21:16.423 "adrfam": "ipv4", 00:21:16.423 "trsvcid": "4420", 00:21:16.423 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:16.423 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:16.423 "hdgst": false, 00:21:16.423 "ddgst": false 00:21:16.423 }, 00:21:16.423 "method": "bdev_nvme_attach_controller" 00:21:16.423 },{ 00:21:16.423 "params": { 00:21:16.423 "name": "Nvme6", 00:21:16.423 "trtype": "tcp", 00:21:16.423 "traddr": "10.0.0.2", 00:21:16.423 "adrfam": "ipv4", 00:21:16.423 "trsvcid": "4420", 00:21:16.423 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:16.423 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:16.423 "hdgst": false, 00:21:16.423 "ddgst": false 00:21:16.423 }, 00:21:16.423 "method": "bdev_nvme_attach_controller" 00:21:16.423 },{ 00:21:16.423 "params": { 00:21:16.423 "name": "Nvme7", 00:21:16.423 "trtype": "tcp", 00:21:16.423 "traddr": "10.0.0.2", 00:21:16.423 "adrfam": "ipv4", 00:21:16.423 "trsvcid": "4420", 00:21:16.423 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:16.423 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:16.423 "hdgst": false, 00:21:16.423 "ddgst": false 00:21:16.423 }, 00:21:16.423 "method": "bdev_nvme_attach_controller" 00:21:16.423 },{ 00:21:16.423 "params": { 00:21:16.423 "name": "Nvme8", 00:21:16.423 "trtype": "tcp", 00:21:16.423 "traddr": "10.0.0.2", 00:21:16.423 "adrfam": "ipv4", 00:21:16.423 "trsvcid": "4420", 00:21:16.423 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:16.423 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:21:16.423 "hdgst": false, 00:21:16.423 "ddgst": false 00:21:16.423 }, 00:21:16.423 "method": "bdev_nvme_attach_controller" 00:21:16.423 },{ 00:21:16.423 "params": { 00:21:16.423 "name": "Nvme9", 00:21:16.423 "trtype": "tcp", 00:21:16.423 "traddr": "10.0.0.2", 00:21:16.423 "adrfam": "ipv4", 00:21:16.423 "trsvcid": "4420", 00:21:16.423 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:16.423 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:16.423 "hdgst": false, 00:21:16.423 "ddgst": false 00:21:16.423 }, 00:21:16.423 "method": "bdev_nvme_attach_controller" 00:21:16.423 },{ 00:21:16.423 "params": { 00:21:16.423 "name": "Nvme10", 00:21:16.423 "trtype": "tcp", 00:21:16.423 "traddr": "10.0.0.2", 00:21:16.423 "adrfam": "ipv4", 00:21:16.423 "trsvcid": "4420", 00:21:16.423 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:16.423 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:16.423 "hdgst": false, 00:21:16.423 "ddgst": false 00:21:16.423 }, 00:21:16.423 "method": "bdev_nvme_attach_controller" 00:21:16.423 }' 00:21:16.423 [2024-07-16 01:26:42.235696] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.423 [2024-07-16 01:26:42.308671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:17.798 Running I/O for 1 seconds... 00:21:19.172 00:21:19.173 Latency(us) 00:21:19.173 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:19.173 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:19.173 Verification LBA range: start 0x0 length 0x400 00:21:19.173 Nvme1n1 : 1.04 246.67 15.42 0.00 0.00 256763.61 17725.93 214708.42 00:21:19.173 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:19.173 Verification LBA range: start 0x0 length 0x400 00:21:19.173 Nvme2n1 : 1.09 239.31 14.96 0.00 0.00 258677.26 4119.41 230686.72 00:21:19.173 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:19.173 Verification LBA range: start 0x0 length 0x400 00:21:19.173 Nvme3n1 : 1.12 336.38 21.02 0.00 0.00 179106.84 7427.41 204721.98 00:21:19.173 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:19.173 Verification LBA range: start 0x0 length 0x400 00:21:19.173 Nvme4n1 : 1.09 301.66 18.85 0.00 0.00 199548.94 5242.88 211712.49 00:21:19.173 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:19.173 Verification LBA range: start 0x0 length 0x400 00:21:19.173 Nvme5n1 : 1.13 283.98 17.75 0.00 0.00 211187.03 16852.11 211712.49 00:21:19.173 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:19.173 Verification LBA range: start 0x0 length 0x400 00:21:19.173 Nvme6n1 : 1.13 282.66 17.67 0.00 0.00 209124.50 14979.66 219701.64 00:21:19.173 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:19.173 Verification LBA range: start 0x0 length 0x400 00:21:19.173 Nvme7n1 : 1.12 289.84 18.12 0.00 0.00 197128.90 3838.54 210713.84 00:21:19.173 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:19.173 Verification LBA range: start 0x0 length 0x400 00:21:19.173 Nvme8n1 : 1.15 335.18 20.95 0.00 0.00 171265.63 9424.70 202724.69 00:21:19.173 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:19.173 Verification LBA range: start 0x0 length 0x400 00:21:19.173 Nvme9n1 : 1.14 280.92 17.56 0.00 0.00 201355.07 18350.08 231685.36 00:21:19.173 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:19.173 
Verification LBA range: start 0x0 length 0x400 00:21:19.173 Nvme10n1 : 1.14 288.44 18.03 0.00 0.00 192744.67 1014.25 218702.99 00:21:19.173 =================================================================================================================== 00:21:19.173 Total : 2885.04 180.32 0.00 0.00 204457.73 1014.25 231685.36 00:21:19.173 01:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:21:19.173 01:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:19.173 01:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:19.173 01:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:19.173 01:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:19.173 01:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:19.173 01:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:21:19.173 01:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:19.173 01:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:21:19.173 01:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:19.173 01:26:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:19.173 rmmod nvme_tcp 00:21:19.173 rmmod nvme_fabrics 00:21:19.173 rmmod nvme_keyring 00:21:19.173 01:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:19.173 01:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:21:19.173 01:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:21:19.173 01:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 3446230 ']' 00:21:19.173 01:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 3446230 00:21:19.173 01:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 3446230 ']' 00:21:19.173 01:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 3446230 00:21:19.173 01:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:21:19.173 01:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:19.173 01:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3446230 00:21:19.173 01:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:19.173 01:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:19.173 01:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3446230' 00:21:19.173 killing process with pid 3446230 00:21:19.173 01:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 3446230 00:21:19.173 01:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 3446230 00:21:19.742 
01:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:19.742 01:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:19.742 01:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:19.742 01:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:19.742 01:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:19.742 01:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.742 01:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:19.742 01:26:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:21.647 00:21:21.647 real 0m14.410s 00:21:21.647 user 0m33.501s 00:21:21.647 sys 0m5.089s 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:21.647 ************************************ 00:21:21.647 END TEST nvmf_shutdown_tc1 00:21:21.647 ************************************ 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:21.647 ************************************ 00:21:21.647 START TEST nvmf_shutdown_tc2 00:21:21.647 ************************************ 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:21.647 01:26:47 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:21.647 01:26:47 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:21.647 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:21.907 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:21.907 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
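Editor's sketch: gather_supported_nvmf_pci_devs above matches the Intel/Mellanox device IDs collected in the e810/x722/mlx arrays against the PCI bus, then turns each matching function into a kernel interface name via sysfs; the 'Found net devices under ...' lines that follow are its output. The core lookup, condensed from the nvmf/common.sh@382-@401 steps in the trace (the real loop also checks link state, the [[ up == up ]] test):

for pci in "${pci_devs[@]}"; do
    # Any netdev bound to this PCI function is listed under its sysfs node.
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")    # strip the path, keep e.g. cvl_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done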
00:21:21.907 Found net devices under 0000:86:00.0: cvl_0_0 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:21.907 Found net devices under 0000:86:00.1: cvl_0_1 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:21.907 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:21.908 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:21.908 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:21.908 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:21.908 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:21.908 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:21.908 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:21:21.908 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:21.908 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:21.908 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:21.908 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:21.908 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:21.908 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:21.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:21.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:21:21.908 00:21:21.908 --- 10.0.0.2 ping statistics --- 00:21:21.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.908 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:21:21.908 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:22.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:22.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:21:22.166 00:21:22.166 --- 10.0.0.1 ping statistics --- 00:21:22.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.166 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:21:22.166 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:22.166 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:21:22.166 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:22.166 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:22.166 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:22.166 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:22.166 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:22.166 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:22.166 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:22.166 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:22.166 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:22.166 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:22.166 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:22.166 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3448020 00:21:22.166 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3448020 00:21:22.166 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x1E 00:21:22.166 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3448020 ']' 00:21:22.166 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:22.166 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:22.166 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:22.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:22.166 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:22.166 01:26:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:22.166 [2024-07-16 01:26:47.994297] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:21:22.166 [2024-07-16 01:26:47.994349] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:22.166 EAL: No free 2048 kB hugepages reported on node 1 00:21:22.166 [2024-07-16 01:26:48.054368] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:22.166 [2024-07-16 01:26:48.126984] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:22.167 [2024-07-16 01:26:48.127025] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:22.167 [2024-07-16 01:26:48.127031] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:22.167 [2024-07-16 01:26:48.127037] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:22.167 [2024-07-16 01:26:48.127042] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
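Editor's sketch: everything from nvmf/common.sh@248 to @268 above builds the loopback topology this test runs on. The target-facing E810 port (cvl_0_0) moves into a private network namespace, both ends get addresses on 10.0.0.0/24, and reachability is proven in both directions before the target starts. Condensed from the exact commands in the trace:

ip netns add cvl_0_0_ns_spdk                          # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-facing port moves in
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator port stays in the default ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                    # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator

Every target-side command is then wrapped in ip netns exec cvl_0_0_ns_spdk; the doubled prefix on the nvmf_tgt launch above apparently comes from the wrapper being applied twice, once when NVMF_APP is rebuilt at nvmf/common.sh@270 and once more at launch.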
00:21:22.167 [2024-07-16 01:26:48.127158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:22.167 [2024-07-16 01:26:48.127255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:22.167 [2024-07-16 01:26:48.127346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:22.167 [2024-07-16 01:26:48.127360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:23.098 01:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:23.098 01:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:21:23.098 01:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:23.098 01:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:23.098 01:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:23.098 01:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:23.098 01:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:23.098 01:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.098 01:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:23.098 [2024-07-16 01:26:48.840255] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:23.098 01:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.098 01:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:23.098 01:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:23.098 01:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:23.098 01:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:23.098 01:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:23.098 01:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:23.098 01:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:23.098 01:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:23.098 01:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:23.098 01:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:23.098 01:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:23.098 01:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:23.098 01:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:23.098 01:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:23.098 01:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:23.098 01:26:48 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:23.098 01:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:23.098 01:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:23.098 01:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:23.098 01:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:23.098 01:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:23.098 01:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:23.098 01:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:23.098 01:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:23.098 01:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:23.098 01:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:23.098 01:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.098 01:26:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:23.098 Malloc1 00:21:23.098 [2024-07-16 01:26:48.935717] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:23.098 Malloc2 00:21:23.098 Malloc3 00:21:23.098 Malloc4 00:21:23.098 Malloc5 00:21:23.356 Malloc6 00:21:23.356 Malloc7 00:21:23.356 Malloc8 00:21:23.356 Malloc9 00:21:23.356 Malloc10 00:21:23.356 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.356 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:23.356 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:23.356 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:23.614 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=3448297 00:21:23.614 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 3448297 /var/tmp/bdevperf.sock 00:21:23.614 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3448297 ']' 00:21:23.614 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:23.614 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:23.614 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:23.614 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:23.614 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:23.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
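Editor's sketch: with the ten Malloc bdevs created and exported (the per-subsystem RPCs are batched into rpcs.txt; the file's contents are not echoed in the log), the initiator side is a single bdevperf run. The --json /dev/fd/63 in the trace is simply what a bash process substitution looks like once expanded, so the launch at target/shutdown.sh@102 is equivalent to:

# -r: private RPC socket for this bdevperf instance
# -q 64: queue depth; -o 65536: 64 KiB I/O size; -w verify: read-back
# verification workload; -t 10: run for ten seconds
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10

gen_nvmf_target_json, whose trace follows, attaches one NVMe-oF controller per subsystem, so bdevperf ends up driving bdevs Nvme1n1 through Nvme10n1.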
00:21:23.614 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:23.614 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:21:23.614 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:23.614 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:21:23.614 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:23.614 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:23.614 { 00:21:23.614 "params": { 00:21:23.614 "name": "Nvme$subsystem", 00:21:23.614 "trtype": "$TEST_TRANSPORT", 00:21:23.614 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.614 "adrfam": "ipv4", 00:21:23.614 "trsvcid": "$NVMF_PORT", 00:21:23.614 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.614 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.614 "hdgst": ${hdgst:-false}, 00:21:23.614 "ddgst": ${ddgst:-false} 00:21:23.614 }, 00:21:23.614 "method": "bdev_nvme_attach_controller" 00:21:23.614 } 00:21:23.614 EOF 00:21:23.614 )") 00:21:23.614 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:23.614 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:23.614 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:23.614 { 00:21:23.614 "params": { 00:21:23.614 "name": "Nvme$subsystem", 00:21:23.614 "trtype": "$TEST_TRANSPORT", 00:21:23.614 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.614 "adrfam": "ipv4", 00:21:23.614 "trsvcid": "$NVMF_PORT", 00:21:23.614 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.614 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.614 "hdgst": ${hdgst:-false}, 00:21:23.614 "ddgst": ${ddgst:-false} 00:21:23.614 }, 00:21:23.614 "method": "bdev_nvme_attach_controller" 00:21:23.614 } 00:21:23.614 EOF 00:21:23.614 )") 00:21:23.614 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:23.614 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:23.614 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:23.614 { 00:21:23.614 "params": { 00:21:23.614 "name": "Nvme$subsystem", 00:21:23.614 "trtype": "$TEST_TRANSPORT", 00:21:23.614 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.614 "adrfam": "ipv4", 00:21:23.614 "trsvcid": "$NVMF_PORT", 00:21:23.614 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.614 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.614 "hdgst": ${hdgst:-false}, 00:21:23.614 "ddgst": ${ddgst:-false} 00:21:23.614 }, 00:21:23.614 "method": "bdev_nvme_attach_controller" 00:21:23.614 } 00:21:23.614 EOF 00:21:23.614 )") 00:21:23.614 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:23.614 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:23.614 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:23.614 { 00:21:23.614 "params": { 00:21:23.614 "name": "Nvme$subsystem", 00:21:23.614 "trtype": "$TEST_TRANSPORT", 00:21:23.614 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.614 "adrfam": "ipv4", 00:21:23.614 "trsvcid": "$NVMF_PORT", 
00:21:23.614 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.615 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.615 "hdgst": ${hdgst:-false}, 00:21:23.615 "ddgst": ${ddgst:-false} 00:21:23.615 }, 00:21:23.615 "method": "bdev_nvme_attach_controller" 00:21:23.615 } 00:21:23.615 EOF 00:21:23.615 )") 00:21:23.615 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:23.615 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:23.615 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:23.615 { 00:21:23.615 "params": { 00:21:23.615 "name": "Nvme$subsystem", 00:21:23.615 "trtype": "$TEST_TRANSPORT", 00:21:23.615 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.615 "adrfam": "ipv4", 00:21:23.615 "trsvcid": "$NVMF_PORT", 00:21:23.615 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.615 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.615 "hdgst": ${hdgst:-false}, 00:21:23.615 "ddgst": ${ddgst:-false} 00:21:23.615 }, 00:21:23.615 "method": "bdev_nvme_attach_controller" 00:21:23.615 } 00:21:23.615 EOF 00:21:23.615 )") 00:21:23.615 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:23.615 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:23.615 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:23.615 { 00:21:23.615 "params": { 00:21:23.615 "name": "Nvme$subsystem", 00:21:23.615 "trtype": "$TEST_TRANSPORT", 00:21:23.615 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.615 "adrfam": "ipv4", 00:21:23.615 "trsvcid": "$NVMF_PORT", 00:21:23.615 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.615 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.615 "hdgst": ${hdgst:-false}, 00:21:23.615 "ddgst": ${ddgst:-false} 00:21:23.615 }, 00:21:23.615 "method": "bdev_nvme_attach_controller" 00:21:23.615 } 00:21:23.615 EOF 00:21:23.615 )") 00:21:23.615 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:23.615 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:23.615 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:23.615 { 00:21:23.615 "params": { 00:21:23.615 "name": "Nvme$subsystem", 00:21:23.615 "trtype": "$TEST_TRANSPORT", 00:21:23.615 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.615 "adrfam": "ipv4", 00:21:23.615 "trsvcid": "$NVMF_PORT", 00:21:23.615 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.615 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.615 "hdgst": ${hdgst:-false}, 00:21:23.615 "ddgst": ${ddgst:-false} 00:21:23.615 }, 00:21:23.615 "method": "bdev_nvme_attach_controller" 00:21:23.615 } 00:21:23.615 EOF 00:21:23.615 )") 00:21:23.615 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:23.615 [2024-07-16 01:26:49.416282] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
00:21:23.615 [2024-07-16 01:26:49.416329] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3448297 ] 00:21:23.615 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:23.615 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:23.615 { 00:21:23.615 "params": { 00:21:23.615 "name": "Nvme$subsystem", 00:21:23.615 "trtype": "$TEST_TRANSPORT", 00:21:23.615 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.615 "adrfam": "ipv4", 00:21:23.615 "trsvcid": "$NVMF_PORT", 00:21:23.615 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.615 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.615 "hdgst": ${hdgst:-false}, 00:21:23.615 "ddgst": ${ddgst:-false} 00:21:23.615 }, 00:21:23.615 "method": "bdev_nvme_attach_controller" 00:21:23.615 } 00:21:23.615 EOF 00:21:23.615 )") 00:21:23.615 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:23.615 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:23.615 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:23.615 { 00:21:23.615 "params": { 00:21:23.615 "name": "Nvme$subsystem", 00:21:23.615 "trtype": "$TEST_TRANSPORT", 00:21:23.615 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.615 "adrfam": "ipv4", 00:21:23.615 "trsvcid": "$NVMF_PORT", 00:21:23.615 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.615 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.615 "hdgst": ${hdgst:-false}, 00:21:23.615 "ddgst": ${ddgst:-false} 00:21:23.615 }, 00:21:23.615 "method": "bdev_nvme_attach_controller" 00:21:23.615 } 00:21:23.615 EOF 00:21:23.615 )") 00:21:23.615 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:23.615 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:23.615 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:23.615 { 00:21:23.615 "params": { 00:21:23.615 "name": "Nvme$subsystem", 00:21:23.615 "trtype": "$TEST_TRANSPORT", 00:21:23.615 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.615 "adrfam": "ipv4", 00:21:23.615 "trsvcid": "$NVMF_PORT", 00:21:23.615 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.615 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.615 "hdgst": ${hdgst:-false}, 00:21:23.615 "ddgst": ${ddgst:-false} 00:21:23.615 }, 00:21:23.615 "method": "bdev_nvme_attach_controller" 00:21:23.615 } 00:21:23.615 EOF 00:21:23.615 )") 00:21:23.615 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:23.615 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
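Editor's sketch: the jq . above closes config generation for this bdevperf run. Once I/O starts, the pass/fail logic visible further down (target/shutdown.sh@59-@67, the read_io_count=3 / 67 / 131 checks) is a short polling loop, paired with the killprocess teardown that follows it. Both are reconstructed here from the xtrace and simplified; the real helpers live in target/shutdown.sh and common/autotest_common.sh:

# Poll bdevperf's iostat until the bdev has served at least 100 reads,
# retrying up to 10 times with a 250 ms pause (values as in the trace).
waitforio() {
    local rpc_sock=$1 bdev=$2
    local ret=1 i
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

# Teardown as traced at common/autotest_common.sh@948-@972: confirm the pid
# is alive, kill it (escalating if it runs under sudo), then reap it.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1
    if [ "$(ps --no-headers -o comm= "$pid")" = "sudo" ]; then
        sudo kill "$pid"      # sudo branch inferred from the '[' ... = sudo ']' test
    else
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    wait "$pid" || true
}

rpc_cmd here is autotest's RPC helper; a shutdown counts as clean when bdevperf reports real I/O before the target goes away and both processes exit reapable.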
00:21:23.615 EAL: No free 2048 kB hugepages reported on node 1 00:21:23.615 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:21:23.615 01:26:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:23.615 "params": { 00:21:23.615 "name": "Nvme1", 00:21:23.615 "trtype": "tcp", 00:21:23.615 "traddr": "10.0.0.2", 00:21:23.615 "adrfam": "ipv4", 00:21:23.615 "trsvcid": "4420", 00:21:23.615 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.615 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:23.615 "hdgst": false, 00:21:23.615 "ddgst": false 00:21:23.615 }, 00:21:23.615 "method": "bdev_nvme_attach_controller" 00:21:23.615 },{ 00:21:23.615 "params": { 00:21:23.615 "name": "Nvme2", 00:21:23.615 "trtype": "tcp", 00:21:23.615 "traddr": "10.0.0.2", 00:21:23.615 "adrfam": "ipv4", 00:21:23.615 "trsvcid": "4420", 00:21:23.615 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:23.615 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:23.615 "hdgst": false, 00:21:23.615 "ddgst": false 00:21:23.615 }, 00:21:23.615 "method": "bdev_nvme_attach_controller" 00:21:23.615 },{ 00:21:23.615 "params": { 00:21:23.615 "name": "Nvme3", 00:21:23.615 "trtype": "tcp", 00:21:23.615 "traddr": "10.0.0.2", 00:21:23.615 "adrfam": "ipv4", 00:21:23.615 "trsvcid": "4420", 00:21:23.615 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:23.615 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:23.615 "hdgst": false, 00:21:23.615 "ddgst": false 00:21:23.615 }, 00:21:23.615 "method": "bdev_nvme_attach_controller" 00:21:23.615 },{ 00:21:23.615 "params": { 00:21:23.615 "name": "Nvme4", 00:21:23.615 "trtype": "tcp", 00:21:23.615 "traddr": "10.0.0.2", 00:21:23.615 "adrfam": "ipv4", 00:21:23.615 "trsvcid": "4420", 00:21:23.615 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:23.615 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:23.615 "hdgst": false, 00:21:23.615 "ddgst": false 00:21:23.615 }, 00:21:23.615 "method": "bdev_nvme_attach_controller" 00:21:23.615 },{ 00:21:23.615 "params": { 00:21:23.615 "name": "Nvme5", 00:21:23.615 "trtype": "tcp", 00:21:23.615 "traddr": "10.0.0.2", 00:21:23.615 "adrfam": "ipv4", 00:21:23.615 "trsvcid": "4420", 00:21:23.615 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:23.615 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:23.615 "hdgst": false, 00:21:23.615 "ddgst": false 00:21:23.615 }, 00:21:23.615 "method": "bdev_nvme_attach_controller" 00:21:23.615 },{ 00:21:23.615 "params": { 00:21:23.615 "name": "Nvme6", 00:21:23.615 "trtype": "tcp", 00:21:23.615 "traddr": "10.0.0.2", 00:21:23.615 "adrfam": "ipv4", 00:21:23.615 "trsvcid": "4420", 00:21:23.615 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:23.615 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:23.615 "hdgst": false, 00:21:23.615 "ddgst": false 00:21:23.615 }, 00:21:23.615 "method": "bdev_nvme_attach_controller" 00:21:23.615 },{ 00:21:23.615 "params": { 00:21:23.615 "name": "Nvme7", 00:21:23.615 "trtype": "tcp", 00:21:23.615 "traddr": "10.0.0.2", 00:21:23.615 "adrfam": "ipv4", 00:21:23.615 "trsvcid": "4420", 00:21:23.615 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:23.615 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:23.615 "hdgst": false, 00:21:23.615 "ddgst": false 00:21:23.615 }, 00:21:23.615 "method": "bdev_nvme_attach_controller" 00:21:23.615 },{ 00:21:23.615 "params": { 00:21:23.615 "name": "Nvme8", 00:21:23.615 "trtype": "tcp", 00:21:23.615 "traddr": "10.0.0.2", 00:21:23.615 "adrfam": "ipv4", 00:21:23.615 "trsvcid": "4420", 00:21:23.615 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:23.615 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:21:23.615 "hdgst": false, 00:21:23.615 "ddgst": false 00:21:23.615 }, 00:21:23.615 "method": "bdev_nvme_attach_controller" 00:21:23.615 },{ 00:21:23.615 "params": { 00:21:23.615 "name": "Nvme9", 00:21:23.615 "trtype": "tcp", 00:21:23.616 "traddr": "10.0.0.2", 00:21:23.616 "adrfam": "ipv4", 00:21:23.616 "trsvcid": "4420", 00:21:23.616 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:23.616 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:23.616 "hdgst": false, 00:21:23.616 "ddgst": false 00:21:23.616 }, 00:21:23.616 "method": "bdev_nvme_attach_controller" 00:21:23.616 },{ 00:21:23.616 "params": { 00:21:23.616 "name": "Nvme10", 00:21:23.616 "trtype": "tcp", 00:21:23.616 "traddr": "10.0.0.2", 00:21:23.616 "adrfam": "ipv4", 00:21:23.616 "trsvcid": "4420", 00:21:23.616 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:23.616 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:23.616 "hdgst": false, 00:21:23.616 "ddgst": false 00:21:23.616 }, 00:21:23.616 "method": "bdev_nvme_attach_controller" 00:21:23.616 }' 00:21:23.616 [2024-07-16 01:26:49.474184] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.616 [2024-07-16 01:26:49.545884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.513 Running I/O for 10 seconds... 00:21:25.513 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:25.513 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:21:25.513 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:25.513 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.513 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:25.513 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.513 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:25.513 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:25.513 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:25.513 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:21:25.513 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:21:25.513 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:25.513 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:25.513 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:25.513 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:25.513 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.513 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:25.513 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.514 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:21:25.514 01:26:51 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:21:25.514 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:25.771 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:25.771 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:25.771 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:25.771 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:25.771 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.771 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:25.771 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.771 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:21:25.771 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:21:25.771 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:26.028 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:26.028 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:26.028 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:26.028 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:26.028 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.028 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:26.028 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.028 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:21:26.028 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:21:26.028 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:21:26.028 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:21:26.028 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:21:26.028 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 3448297 00:21:26.028 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3448297 ']' 00:21:26.028 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3448297 00:21:26.028 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:21:26.028 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:26.028 01:26:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3448297 00:21:26.028 01:26:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:26.028 01:26:52 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:21:26.029 01:26:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3448297'
killing process with pid 3448297
01:26:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3448297
01:26:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3448297
00:21:26.286 Received shutdown signal, test time was about 0.902550 seconds
00:21:26.286
00:21:26.286 Latency(us)
00:21:26.286 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:26.286 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:26.286 Verification LBA range: start 0x0 length 0x400
00:21:26.286 Nvme1n1 : 0.90 284.99 17.81 0.00 0.00 221936.15 17850.76 213709.78
00:21:26.286 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:26.286 Verification LBA range: start 0x0 length 0x400
00:21:26.286 Nvme2n1 : 0.90 291.91 18.24 0.00 0.00 212564.44 3510.86 210713.84
00:21:26.286 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:26.286 Verification LBA range: start 0x0 length 0x400
00:21:26.286 Nvme3n1 : 0.87 302.66 18.92 0.00 0.00 200287.34 3791.73 211712.49
00:21:26.286 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:26.286 Verification LBA range: start 0x0 length 0x400
00:21:26.286 Nvme4n1 : 0.88 292.23 18.26 0.00 0.00 204945.80 13481.69 215707.06
00:21:26.286 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:26.286 Verification LBA range: start 0x0 length 0x400
00:21:26.286 Nvme5n1 : 0.89 287.01 17.94 0.00 0.00 205154.50 15416.56 206719.27
00:21:26.286 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:26.286 Verification LBA range: start 0x0 length 0x400
00:21:26.286 Nvme6n1 : 0.89 288.72 18.05 0.00 0.00 199904.06 17975.59 211712.49
00:21:26.286 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:26.286 Verification LBA range: start 0x0 length 0x400
00:21:26.286 Nvme7n1 : 0.88 290.23 18.14 0.00 0.00 195027.63 16352.79 208716.56
00:21:26.286 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:26.286 Verification LBA range: start 0x0 length 0x400
00:21:26.287 Nvme8n1 : 0.89 287.68 17.98 0.00 0.00 193034.36 15042.07 216705.71
00:21:26.287 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:26.287 Verification LBA range: start 0x0 length 0x400
00:21:26.287 Nvme9n1 : 0.90 283.85 17.74 0.00 0.00 191765.70 7583.45 214708.42
00:21:26.287 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:26.287 Verification LBA range: start 0x0 length 0x400
00:21:26.287 Nvme10n1 : 0.86 223.03 13.94 0.00 0.00 237771.50 23343.30 226692.14
00:21:26.287 ===================================================================================================================
00:21:26.287 Total : 2832.30 177.02 0.00 0.00 205431.29 3510.86 226692.14
00:21:26.287 01:26:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:21:27.673 01:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 3448020
00:21:27.674 01:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:21:27.674 01:26:53
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:27.674 01:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:27.674 01:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:27.674 01:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:27.674 01:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:27.674 01:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:21:27.674 01:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:27.674 01:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:21:27.674 01:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:27.674 01:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:27.674 rmmod nvme_tcp 00:21:27.674 rmmod nvme_fabrics 00:21:27.674 rmmod nvme_keyring 00:21:27.674 01:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:27.674 01:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:21:27.674 01:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:21:27.674 01:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 3448020 ']' 00:21:27.674 01:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 3448020 00:21:27.674 01:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3448020 ']' 00:21:27.674 01:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3448020 00:21:27.674 01:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:21:27.674 01:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:27.674 01:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3448020 00:21:27.674 01:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:27.674 01:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:27.674 01:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3448020' 00:21:27.674 killing process with pid 3448020 00:21:27.674 01:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3448020 00:21:27.674 01:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3448020 00:21:27.988 01:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:27.988 01:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:27.988 01:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:27.988 01:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
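The stoptarget/nvmftestfini sequence traced above reduces to a fixed teardown pattern: delete the bdevperf scratch files, retry unloading the NVMe/TCP initiator modules while connections drain, then signal and reap the target process. A minimal stand-alone sketch of that pattern (the function name and pid argument are illustrative placeholders; the real helpers live in target/shutdown.sh and nvmf/common.sh):

teardown_target() {
    # Placeholder argument for this sketch; the harness tracks the pid itself.
    local nvmf_pid=$1

    # Scratch files left behind by the bdevperf run
    rm -f ./local-job0-0-verify.state
    rm -rf ./bdevperf.conf ./rpcs.txt

    # Module unload can fail while connections drain, so retry a bounded
    # number of times, mirroring the set +e / {1..20} loop in the trace
    set +e
    for _ in {1..20}; do
        modprobe -v -r nvme-tcp && break
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    set -e

    # Stop the nvmf target app if it is still running, then reap it
    if kill -0 "$nvmf_pid" 2>/dev/null; then
        kill "$nvmf_pid"
        wait "$nvmf_pid"
    fi
}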
00:21:27.988 01:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:27.988 01:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.988 01:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:27.988 01:26:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:29.889 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:29.889 00:21:29.889 real 0m8.242s 00:21:29.889 user 0m25.352s 00:21:29.889 sys 0m1.356s 00:21:29.889 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:29.889 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:29.889 ************************************ 00:21:29.889 END TEST nvmf_shutdown_tc2 00:21:29.889 ************************************ 00:21:30.148 01:26:55 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:30.149 ************************************ 00:21:30.149 START TEST nvmf_shutdown_tc3 00:21:30.149 ************************************ 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
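The gather_supported_nvmf_pci_devs trace that follows walks the PCI bus, buckets NICs by vendor/device ID, and records the kernel net devices registered under each matching function. A condensed sketch of that logic, trimmed to the two Intel E810 device IDs that actually match in this run (the real function also classifies x722 and several Mellanox parts, and consults a pre-built pci_bus_cache rather than reading sysfs directly as done here):

# Condensed sketch of the PCI scan traced below; E810-only for brevity.
intel=0x8086
e810=(0x1592 0x159b)
net_devs=()
for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor") device=$(<"$pci/device")
    [[ $vendor == "$intel" ]] || continue
    for id in "${e810[@]}"; do
        [[ $device == "$id" ]] || continue
        echo "Found ${pci##*/} ($vendor - $device)"
        # Net devices bound to this PCI function, e.g. cvl_0_0 / cvl_0_1
        for net in "$pci"/net/*; do
            [[ -e $net ]] && net_devs+=("${net##*/}")
        done
    done
done
echo "Net devices: ${net_devs[*]}"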
00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:30.149 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:30.149 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:30.149 Found net devices under 0000:86:00.0: cvl_0_0 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:30.149 01:26:55 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:30.149 Found net devices under 0000:86:00.1: cvl_0_1 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:30.149 01:26:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:30.149 01:26:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:30.149 01:26:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:30.149 01:26:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:30.149 01:26:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:30.410 01:26:56 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:30.410 01:26:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:30.410 01:26:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:30.410 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:30.410 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:21:30.410 00:21:30.410 --- 10.0.0.2 ping statistics --- 00:21:30.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:30.410 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:21:30.410 01:26:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:30.410 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:30.410 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:21:30.410 00:21:30.410 --- 10.0.0.1 ping statistics --- 00:21:30.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:30.410 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:21:30.410 01:26:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:30.410 01:26:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:21:30.410 01:26:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:30.410 01:26:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:30.410 01:26:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:30.410 01:26:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:30.410 01:26:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:30.410 01:26:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:30.410 01:26:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:30.410 01:26:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:30.410 01:26:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:30.410 01:26:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:30.410 01:26:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:30.410 01:26:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3449565 00:21:30.410 01:26:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3449565 00:21:30.410 01:26:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:30.410 01:26:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3449565 ']' 00:21:30.410 01:26:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:30.410 01:26:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:30.410 01:26:56 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:30.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:30.410 01:26:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:30.410 01:26:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:30.410 [2024-07-16 01:26:56.292128] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:21:30.410 [2024-07-16 01:26:56.292170] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:30.410 EAL: No free 2048 kB hugepages reported on node 1 00:21:30.410 [2024-07-16 01:26:56.350702] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:30.669 [2024-07-16 01:26:56.430469] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:30.669 [2024-07-16 01:26:56.430501] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:30.669 [2024-07-16 01:26:56.430508] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:30.669 [2024-07-16 01:26:56.430514] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:30.669 [2024-07-16 01:26:56.430519] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:30.669 [2024-07-16 01:26:56.430616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:30.669 [2024-07-16 01:26:56.430726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:30.669 [2024-07-16 01:26:56.430762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:30.669 [2024-07-16 01:26:56.430763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:31.236 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:31.236 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:21:31.236 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:31.236 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:31.236 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:31.236 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:31.236 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:31.236 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.236 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:31.236 [2024-07-16 01:26:57.133052] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:31.236 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.236 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:31.236 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:31.236 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:31.236 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:31.236 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:31.236 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:31.236 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:31.236 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:31.236 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:31.236 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:31.236 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:31.236 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:31.236 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:31.236 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:31.236 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:31.236 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:31.236 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:31.236 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:31.236 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:31.236 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:31.236 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:31.236 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:31.236 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:31.236 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:31.236 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:31.236 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:31.236 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.236 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:31.236 Malloc1 00:21:31.495 [2024-07-16 01:26:57.228802] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:31.495 Malloc2 00:21:31.495 Malloc3 00:21:31.495 Malloc4 00:21:31.495 Malloc5 00:21:31.495 Malloc6 00:21:31.495 Malloc7 00:21:31.754 Malloc8 00:21:31.754 Malloc9 00:21:31.754 Malloc10 00:21:31.754 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.754 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:31.754 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:31.754 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:31.754 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=3449842 00:21:31.754 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 3449842 /var/tmp/bdevperf.sock 00:21:31.754 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3449842 ']' 00:21:31.754 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:31.754 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:31.754 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:31.754 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:31.754 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:31.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:31.754 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:21:31.754 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:31.754 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:21:31.754 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:31.754 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:31.754 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:31.754 { 00:21:31.754 "params": { 00:21:31.754 "name": "Nvme$subsystem", 00:21:31.754 "trtype": "$TEST_TRANSPORT", 00:21:31.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:31.754 "adrfam": "ipv4", 00:21:31.754 "trsvcid": "$NVMF_PORT", 00:21:31.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:31.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:31.754 "hdgst": ${hdgst:-false}, 00:21:31.754 "ddgst": ${ddgst:-false} 00:21:31.754 }, 00:21:31.754 "method": "bdev_nvme_attach_controller" 00:21:31.754 } 00:21:31.754 EOF 00:21:31.754 )") 00:21:31.754 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:31.754 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:31.754 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:31.754 { 00:21:31.754 "params": { 00:21:31.754 "name": "Nvme$subsystem", 00:21:31.754 "trtype": "$TEST_TRANSPORT", 00:21:31.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:31.754 "adrfam": "ipv4", 00:21:31.754 "trsvcid": "$NVMF_PORT", 00:21:31.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:21:31.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:31.754 "hdgst": ${hdgst:-false}, 00:21:31.754 "ddgst": ${ddgst:-false} 00:21:31.754 }, 00:21:31.754 "method": "bdev_nvme_attach_controller" 00:21:31.754 } 00:21:31.754 EOF 00:21:31.754 )") 00:21:31.754 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:31.754 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:31.754 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:31.754 { 00:21:31.754 "params": { 00:21:31.754 "name": "Nvme$subsystem", 00:21:31.754 "trtype": "$TEST_TRANSPORT", 00:21:31.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:31.754 "adrfam": "ipv4", 00:21:31.754 "trsvcid": "$NVMF_PORT", 00:21:31.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:31.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:31.754 "hdgst": ${hdgst:-false}, 00:21:31.754 "ddgst": ${ddgst:-false} 00:21:31.754 }, 00:21:31.754 "method": "bdev_nvme_attach_controller" 00:21:31.754 } 00:21:31.754 EOF 00:21:31.754 )") 00:21:31.754 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:31.754 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:31.754 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:31.754 { 00:21:31.754 "params": { 00:21:31.754 "name": "Nvme$subsystem", 00:21:31.754 "trtype": "$TEST_TRANSPORT", 00:21:31.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:31.754 "adrfam": "ipv4", 00:21:31.754 "trsvcid": "$NVMF_PORT", 00:21:31.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:31.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:31.754 "hdgst": ${hdgst:-false}, 00:21:31.754 "ddgst": ${ddgst:-false} 00:21:31.754 }, 00:21:31.754 "method": "bdev_nvme_attach_controller" 00:21:31.754 } 00:21:31.754 EOF 00:21:31.754 )") 00:21:31.754 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:31.754 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:31.754 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:31.754 { 00:21:31.754 "params": { 00:21:31.754 "name": "Nvme$subsystem", 00:21:31.754 "trtype": "$TEST_TRANSPORT", 00:21:31.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:31.754 "adrfam": "ipv4", 00:21:31.754 "trsvcid": "$NVMF_PORT", 00:21:31.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:31.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:31.754 "hdgst": ${hdgst:-false}, 00:21:31.754 "ddgst": ${ddgst:-false} 00:21:31.754 }, 00:21:31.754 "method": "bdev_nvme_attach_controller" 00:21:31.754 } 00:21:31.754 EOF 00:21:31.754 )") 00:21:31.754 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:31.754 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:31.754 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:31.754 { 00:21:31.754 "params": { 00:21:31.754 "name": "Nvme$subsystem", 00:21:31.754 "trtype": "$TEST_TRANSPORT", 00:21:31.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:31.754 "adrfam": "ipv4", 00:21:31.754 "trsvcid": "$NVMF_PORT", 00:21:31.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:31.754 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:21:31.754 "hdgst": ${hdgst:-false}, 00:21:31.754 "ddgst": ${ddgst:-false} 00:21:31.754 }, 00:21:31.754 "method": "bdev_nvme_attach_controller" 00:21:31.754 } 00:21:31.754 EOF 00:21:31.754 )") 00:21:31.754 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:31.754 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:31.754 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:31.754 { 00:21:31.754 "params": { 00:21:31.754 "name": "Nvme$subsystem", 00:21:31.754 "trtype": "$TEST_TRANSPORT", 00:21:31.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:31.754 "adrfam": "ipv4", 00:21:31.754 "trsvcid": "$NVMF_PORT", 00:21:31.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:31.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:31.754 "hdgst": ${hdgst:-false}, 00:21:31.754 "ddgst": ${ddgst:-false} 00:21:31.754 }, 00:21:31.754 "method": "bdev_nvme_attach_controller" 00:21:31.754 } 00:21:31.754 EOF 00:21:31.754 )") 00:21:31.754 [2024-07-16 01:26:57.699366] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:21:31.754 [2024-07-16 01:26:57.699414] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3449842 ] 00:21:31.754 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:31.754 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:31.754 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:31.754 { 00:21:31.754 "params": { 00:21:31.754 "name": "Nvme$subsystem", 00:21:31.754 "trtype": "$TEST_TRANSPORT", 00:21:31.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:31.754 "adrfam": "ipv4", 00:21:31.754 "trsvcid": "$NVMF_PORT", 00:21:31.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:31.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:31.754 "hdgst": ${hdgst:-false}, 00:21:31.754 "ddgst": ${ddgst:-false} 00:21:31.754 }, 00:21:31.754 "method": "bdev_nvme_attach_controller" 00:21:31.754 } 00:21:31.754 EOF 00:21:31.754 )") 00:21:31.755 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:31.755 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:31.755 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:31.755 { 00:21:31.755 "params": { 00:21:31.755 "name": "Nvme$subsystem", 00:21:31.755 "trtype": "$TEST_TRANSPORT", 00:21:31.755 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:31.755 "adrfam": "ipv4", 00:21:31.755 "trsvcid": "$NVMF_PORT", 00:21:31.755 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:31.755 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:31.755 "hdgst": ${hdgst:-false}, 00:21:31.755 "ddgst": ${ddgst:-false} 00:21:31.755 }, 00:21:31.755 "method": "bdev_nvme_attach_controller" 00:21:31.755 } 00:21:31.755 EOF 00:21:31.755 )") 00:21:31.755 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:31.755 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:31.755 01:26:57 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:31.755 { 00:21:31.755 "params": { 00:21:31.755 "name": "Nvme$subsystem", 00:21:31.755 "trtype": "$TEST_TRANSPORT", 00:21:31.755 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:31.755 "adrfam": "ipv4", 00:21:31.755 "trsvcid": "$NVMF_PORT", 00:21:31.755 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:31.755 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:31.755 "hdgst": ${hdgst:-false}, 00:21:31.755 "ddgst": ${ddgst:-false} 00:21:31.755 }, 00:21:31.755 "method": "bdev_nvme_attach_controller" 00:21:31.755 } 00:21:31.755 EOF 00:21:31.755 )") 00:21:31.755 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:31.755 EAL: No free 2048 kB hugepages reported on node 1 00:21:31.755 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:21:31.755 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:21:31.755 01:26:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:31.755 "params": { 00:21:31.755 "name": "Nvme1", 00:21:31.755 "trtype": "tcp", 00:21:31.755 "traddr": "10.0.0.2", 00:21:31.755 "adrfam": "ipv4", 00:21:31.755 "trsvcid": "4420", 00:21:31.755 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:31.755 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:31.755 "hdgst": false, 00:21:31.755 "ddgst": false 00:21:31.755 }, 00:21:31.755 "method": "bdev_nvme_attach_controller" 00:21:31.755 },{ 00:21:31.755 "params": { 00:21:31.755 "name": "Nvme2", 00:21:31.755 "trtype": "tcp", 00:21:31.755 "traddr": "10.0.0.2", 00:21:31.755 "adrfam": "ipv4", 00:21:31.755 "trsvcid": "4420", 00:21:31.755 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:31.755 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:31.755 "hdgst": false, 00:21:31.755 "ddgst": false 00:21:31.755 }, 00:21:31.755 "method": "bdev_nvme_attach_controller" 00:21:31.755 },{ 00:21:31.755 "params": { 00:21:31.755 "name": "Nvme3", 00:21:31.755 "trtype": "tcp", 00:21:31.755 "traddr": "10.0.0.2", 00:21:31.755 "adrfam": "ipv4", 00:21:31.755 "trsvcid": "4420", 00:21:31.755 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:31.755 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:31.755 "hdgst": false, 00:21:31.755 "ddgst": false 00:21:31.755 }, 00:21:31.755 "method": "bdev_nvme_attach_controller" 00:21:31.755 },{ 00:21:31.755 "params": { 00:21:31.755 "name": "Nvme4", 00:21:31.755 "trtype": "tcp", 00:21:31.755 "traddr": "10.0.0.2", 00:21:31.755 "adrfam": "ipv4", 00:21:31.755 "trsvcid": "4420", 00:21:31.755 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:31.755 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:31.755 "hdgst": false, 00:21:31.755 "ddgst": false 00:21:31.755 }, 00:21:31.755 "method": "bdev_nvme_attach_controller" 00:21:31.755 },{ 00:21:31.755 "params": { 00:21:31.755 "name": "Nvme5", 00:21:31.755 "trtype": "tcp", 00:21:31.755 "traddr": "10.0.0.2", 00:21:31.755 "adrfam": "ipv4", 00:21:31.755 "trsvcid": "4420", 00:21:31.755 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:31.755 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:31.755 "hdgst": false, 00:21:31.755 "ddgst": false 00:21:31.755 }, 00:21:31.755 "method": "bdev_nvme_attach_controller" 00:21:31.755 },{ 00:21:31.755 "params": { 00:21:31.755 "name": "Nvme6", 00:21:31.755 "trtype": "tcp", 00:21:31.755 "traddr": "10.0.0.2", 00:21:31.755 "adrfam": "ipv4", 00:21:31.755 "trsvcid": "4420", 00:21:31.755 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:31.755 "hostnqn": "nqn.2016-06.io.spdk:host6", 
00:21:31.755 "hdgst": false, 00:21:31.755 "ddgst": false 00:21:31.755 }, 00:21:31.755 "method": "bdev_nvme_attach_controller" 00:21:31.755 },{ 00:21:31.755 "params": { 00:21:31.755 "name": "Nvme7", 00:21:31.755 "trtype": "tcp", 00:21:31.755 "traddr": "10.0.0.2", 00:21:31.755 "adrfam": "ipv4", 00:21:31.755 "trsvcid": "4420", 00:21:31.755 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:31.755 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:31.755 "hdgst": false, 00:21:31.755 "ddgst": false 00:21:31.755 }, 00:21:31.755 "method": "bdev_nvme_attach_controller" 00:21:31.755 },{ 00:21:31.755 "params": { 00:21:31.755 "name": "Nvme8", 00:21:31.755 "trtype": "tcp", 00:21:31.755 "traddr": "10.0.0.2", 00:21:31.755 "adrfam": "ipv4", 00:21:31.755 "trsvcid": "4420", 00:21:31.755 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:31.755 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:31.755 "hdgst": false, 00:21:31.755 "ddgst": false 00:21:31.755 }, 00:21:31.755 "method": "bdev_nvme_attach_controller" 00:21:31.755 },{ 00:21:31.755 "params": { 00:21:31.755 "name": "Nvme9", 00:21:31.755 "trtype": "tcp", 00:21:31.755 "traddr": "10.0.0.2", 00:21:31.755 "adrfam": "ipv4", 00:21:31.755 "trsvcid": "4420", 00:21:31.755 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:31.755 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:31.755 "hdgst": false, 00:21:31.755 "ddgst": false 00:21:31.755 }, 00:21:31.755 "method": "bdev_nvme_attach_controller" 00:21:31.755 },{ 00:21:31.755 "params": { 00:21:31.755 "name": "Nvme10", 00:21:31.755 "trtype": "tcp", 00:21:31.755 "traddr": "10.0.0.2", 00:21:31.755 "adrfam": "ipv4", 00:21:31.755 "trsvcid": "4420", 00:21:31.755 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:31.755 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:31.755 "hdgst": false, 00:21:31.755 "ddgst": false 00:21:31.755 }, 00:21:31.755 "method": "bdev_nvme_attach_controller" 00:21:31.755 }' 00:21:32.014 [2024-07-16 01:26:57.757170] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.014 [2024-07-16 01:26:57.829139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.923 Running I/O for 10 seconds... 
00:21:33.923 01:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:33.923 01:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:21:33.923 01:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:33.923 01:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.923 01:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:33.923 01:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.923 01:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:33.923 01:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:33.923 01:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:33.923 01:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:33.923 01:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:21:33.923 01:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:21:33.923 01:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:33.923 01:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:33.923 01:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:33.923 01:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.923 01:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:33.923 01:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:33.923 01:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.923 01:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:21:33.923 01:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:21:33.923 01:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:34.196 01:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:34.196 01:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:34.196 01:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:34.196 01:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:34.196 01:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.196 01:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:34.196 01:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.196 01:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 
-- # read_io_count=131 00:21:34.196 01:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:21:34.196 01:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:21:34.196 01:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:21:34.196 01:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:21:34.196 01:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 3449565 00:21:34.197 01:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 3449565 ']' 00:21:34.197 01:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 3449565 00:21:34.197 01:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:21:34.197 01:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:34.197 01:26:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3449565 00:21:34.197 01:27:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:34.197 01:27:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:34.197 01:27:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3449565' 00:21:34.197 killing process with pid 3449565 00:21:34.197 01:27:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 3449565 00:21:34.197 01:27:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 3449565 00:21:34.197 [2024-07-16 01:27:00.029500] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1221e90 is same with the state(5) to be set 00:21:34.197 [2024-07-16 01:27:00.029550] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1221e90 is same with the state(5) to be set 00:21:34.197 [2024-07-16 01:27:00.029557] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1221e90 is same with the state(5) to be set 00:21:34.197 [2024-07-16 01:27:00.029564] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1221e90 is same with the state(5) to be set 00:21:34.197 [2024-07-16 01:27:00.029570] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1221e90 is same with the state(5) to be set 00:21:34.197 [2024-07-16 01:27:00.029576] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1221e90 is same with the state(5) to be set 00:21:34.197 [2024-07-16 01:27:00.029582] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1221e90 is same with the state(5) to be set 00:21:34.197 [2024-07-16 01:27:00.029588] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1221e90 is same with the state(5) to be set 00:21:34.197 [2024-07-16 01:27:00.029594] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1221e90 is same with the state(5) to be set 00:21:34.197 [2024-07-16 01:27:00.029601] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1221e90 is same with the state(5) to be set 00:21:34.197 [2024-07-16 01:27:00.029607] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
00:21:34.197 [2024-07-16 01:27:00.030966] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1224ad0 is same with the state(5) to be set
[... the same tcp.c:1621 *ERROR* line repeated dozens of times for tqpair=0x1224ad0, timestamps 01:27:00.030994-01:27:00.031373 ...]
00:21:34.198 [2024-07-16 01:27:00.032120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:34.198 [2024-07-16 01:27:00.032150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... three more ASYNC EVENT REQUEST / ABORTED - SQ DELETION *NOTICE* pairs follow (cid:1-3), timestamps 01:27:00.032159-01:27:00.032198 ...]
00:21:34.198 [2024-07-16 01:27:00.032204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c94a0 is same with the state(5) to be set
[... a second set of four aborted ASYNC EVENT REQUESTs follows for the other controller, timestamps 01:27:00.032242-01:27:00.032292 ...]
00:21:34.198 [2024-07-16 01:27:00.032298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1554d60 is same with the state(5) to be set
00:21:34.198 [2024-07-16 01:27:00.032315] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1222370 is same with the state(5) to be set
[... the same tcp.c:1621 *ERROR* line repeated dozens of times for tqpair=0x1222370, timestamps 01:27:00.032325-01:27:00.032704 ...]
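These bursts share one signature: nvmf_tcp_qpair_set_recv_state logging that a queue pair's receive state is being set to the value it already holds (state 5) while connections are torn down, and the aborted ASYNC EVENT REQUESTs are the expected side effect of deleting the admin SQ during shutdown. To size the bursts per queue pair when triaging a saved console log, a one-liner along these lines works (the file name console.log is illustrative, not produced by this job):

# count occurrences of the set_recv_state error per tqpair address
grep -o 'recv state of tqpair=0x[0-9a-f]*' console.log | sort | uniq -c | sort -rn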
00:21:34.199 [2024-07-16 01:27:00.035203] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1222850 is same with the state(5) to be set
[... the same tcp.c:1621 *ERROR* line repeated in long bursts for tqpair=0x1222850 (through 01:27:00.036537), tqpair=0x1222d50 (01:27:00.037725-01:27:00.038187), tqpair=0x1223230 (01:27:00.038880-01:27:00.038917), tqpair=0x1223730 (01:27:00.039484-01:27:00.039860) and tqpair=0x1223c10 (from 01:27:00.040802) ...]
00:21:34.202 [2024-07-16 01:27:00.041051] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1223c10 is same with the
state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.041056] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1223c10 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.041062] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1223c10 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.041068] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1223c10 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.041073] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1223c10 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.041079] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1223c10 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.041084] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1223c10 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.041090] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1223c10 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.041095] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1223c10 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.041101] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1223c10 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.041107] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1223c10 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.041113] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1223c10 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.041119] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1223c10 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.041126] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1223c10 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.041131] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1223c10 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.041137] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1223c10 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.041142] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1223c10 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.041883] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12240f0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.041898] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12240f0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.041904] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12240f0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.041911] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12240f0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.041916] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12240f0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.041922] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x12240f0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.041928] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12240f0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042190] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042208] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042215] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042221] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042228] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042234] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042240] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042246] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042251] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042257] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042263] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042269] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042275] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042281] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042287] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042293] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042302] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042308] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042313] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042319] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 
01:27:00.042324] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042330] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042341] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042349] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042355] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042362] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042367] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042373] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042379] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042385] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042391] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042397] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042403] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042409] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042417] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042425] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042434] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042440] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042446] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042452] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042457] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042463] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same 
with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042469] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042476] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042483] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042488] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042494] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042500] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042506] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042511] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042517] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.202 [2024-07-16 01:27:00.042522] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.203 [2024-07-16 01:27:00.042528] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.203 [2024-07-16 01:27:00.042534] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.203 [2024-07-16 01:27:00.042540] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.203 [2024-07-16 01:27:00.042545] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.203 [2024-07-16 01:27:00.042551] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.203 [2024-07-16 01:27:00.042556] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.203 [2024-07-16 01:27:00.042561] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.203 [2024-07-16 01:27:00.042567] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.203 [2024-07-16 01:27:00.042573] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.203 [2024-07-16 01:27:00.042579] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.203 [2024-07-16 01:27:00.042584] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12245d0 is same with the state(5) to be set 00:21:34.203 [2024-07-16 01:27:00.050368] nvme_qpair.c: 
00:21:34.203 [2024-07-16 01:27:00.050368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:34.203 [2024-07-16 01:27:00.050394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.203 [... same ASYNC EVENT REQUEST/ABORTED - SQ DELETION pair repeated for admin cid:1-3, 01:27:00.050403-01:27:00.050446 ...]
00:21:34.203 [2024-07-16 01:27:00.050452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1720ac0 is same with the state(5) to be set
00:21:34.203 [... same block of four aborted ASYNC EVENT REQUESTs followed by the nvme_tcp.c: 327 recv state error repeated for tqpair=0x1580fc0, 0x10a34e0, 0x16c2080, 0x1621050, 0x1580ba0, 0x161aa10 and 0x17200a0, 01:27:00.050479-01:27:00.051032 ...]
00:21:34.203 [2024-07-16 01:27:00.050784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16c94a0 (9): Bad file descriptor
00:21:34.203 [2024-07-16 01:27:00.050878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1554d60 (9): Bad file descriptor
00:21:34.203 [2024-07-16 01:27:00.051813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.203 [2024-07-16 01:27:00.051837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.204 [... same WRITE/ABORTED - SQ DELETION pair repeated for io cid:1-63 (lba:16512-24448, len:128), 01:27:00.051852-01:27:00.052781 ...]
00:21:34.205 [2024-07-16 01:27:00.052804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:21:34.205 [2024-07-16 01:27:00.052860] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e82a10 was disconnected and freed. reset controller. 00:21:34.205 [2024-07-16 01:27:00.052888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.205 [2024-07-16 01:27:00.052896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.205 [... same WRITE/ABORTED - SQ DELETION pair repeated for io cid:1-28 (lba:16512-19968, len:128), 01:27:00.052906-01:27:00.053303 ...] 00:21:34.206 [2024-07-16 01:27:00.053310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1
lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.206 [2024-07-16 01:27:00.053317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.206 [2024-07-16 01:27:00.053324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.206 [2024-07-16 01:27:00.053331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.206 [2024-07-16 01:27:00.053344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.206 [2024-07-16 01:27:00.053350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.206 [2024-07-16 01:27:00.053358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.206 [2024-07-16 01:27:00.053364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.206 [2024-07-16 01:27:00.053373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.206 [2024-07-16 01:27:00.053379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.206 [2024-07-16 01:27:00.053387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.206 [2024-07-16 01:27:00.053393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.206 [2024-07-16 01:27:00.053401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.206 [2024-07-16 01:27:00.053408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.206 [2024-07-16 01:27:00.053415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.206 [2024-07-16 01:27:00.053422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.206 [2024-07-16 01:27:00.053429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.206 [2024-07-16 01:27:00.053436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.206 [2024-07-16 01:27:00.053444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.206 [2024-07-16 01:27:00.053451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.206 [2024-07-16 01:27:00.053458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.206 [2024-07-16 01:27:00.053465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.206 [2024-07-16 01:27:00.053474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.206 [2024-07-16 01:27:00.053481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.206 [2024-07-16 01:27:00.053488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.206 [2024-07-16 01:27:00.053495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.206 [2024-07-16 01:27:00.053503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.206 [2024-07-16 01:27:00.053509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.206 [2024-07-16 01:27:00.053517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.206 [2024-07-16 01:27:00.053525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.206 [2024-07-16 01:27:00.053533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.206 [2024-07-16 01:27:00.053539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.206 [2024-07-16 01:27:00.053547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.206 [2024-07-16 01:27:00.053554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.206 [2024-07-16 01:27:00.053562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.206 [2024-07-16 01:27:00.053568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.206 [2024-07-16 01:27:00.053576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.206 [2024-07-16 01:27:00.053582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.206 [2024-07-16 01:27:00.053590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.206 [2024-07-16 01:27:00.053596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.206 [2024-07-16 01:27:00.053604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.206 [2024-07-16 01:27:00.053611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.206 [2024-07-16 01:27:00.053619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.206 [2024-07-16 01:27:00.053625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.206 [2024-07-16 01:27:00.053633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.206 [2024-07-16 01:27:00.053639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.206 [2024-07-16 01:27:00.053647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.206 [2024-07-16 01:27:00.053655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.206 [2024-07-16 01:27:00.053663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.206 [2024-07-16 01:27:00.053670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.206 [2024-07-16 01:27:00.053677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.206 [2024-07-16 01:27:00.053684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.206 [2024-07-16 01:27:00.053691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.206 [2024-07-16 01:27:00.053698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.206 [2024-07-16 01:27:00.053706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.206 [2024-07-16 01:27:00.053712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.206 [2024-07-16 01:27:00.053720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.206 [2024-07-16 01:27:00.053726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.207 [2024-07-16 01:27:00.053735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.207 [2024-07-16 01:27:00.053741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.207 [2024-07-16 01:27:00.053749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0
00:21:34.207 [2024-07-16 01:27:00.053756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:34.207 [2024-07-16 01:27:00.053764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:34.207 [2024-07-16 01:27:00.053771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:34.207 [2024-07-16 01:27:00.053778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:34.207 [2024-07-16 01:27:00.053785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:34.207 [2024-07-16 01:27:00.053793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:34.207 [2024-07-16 01:27:00.053799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:34.207 [2024-07-16 01:27:00.053807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:34.207 [2024-07-16 01:27:00.053813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:34.207 [2024-07-16 01:27:00.053873] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x202a4f0 was disconnected and freed. reset controller.
00:21:34.207 [2024-07-16 01:27:00.054207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:34.207 [2024-07-16 01:27:00.054227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:34.207 [2024-07-16 01:27:00.054240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:34.207 [2024-07-16 01:27:00.054247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:34.207 [2024-07-16 01:27:00.054255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:34.207 [2024-07-16 01:27:00.054262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:34.207 [2024-07-16 01:27:00.054270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:34.207 [2024-07-16 01:27:00.054276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:34.207 [2024-07-16 01:27:00.054284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:34.207 [2024-07-16 01:27:00.054290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:34.207
[2024-07-16 01:27:00.054298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.207 [2024-07-16 01:27:00.054305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.207 [2024-07-16 01:27:00.054313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.207 [2024-07-16 01:27:00.054319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.207 [2024-07-16 01:27:00.054330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.207 [2024-07-16 01:27:00.054342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.207 [2024-07-16 01:27:00.054352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.207 [2024-07-16 01:27:00.054358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.207 [2024-07-16 01:27:00.054366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.207 [2024-07-16 01:27:00.054373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.207 [2024-07-16 01:27:00.054381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.207 [2024-07-16 01:27:00.054387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.207 [2024-07-16 01:27:00.054395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.207 [2024-07-16 01:27:00.054401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.207 [2024-07-16 01:27:00.054409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.207 [2024-07-16 01:27:00.054415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.207 [2024-07-16 01:27:00.054428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.207 [2024-07-16 01:27:00.054435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.207 [2024-07-16 01:27:00.054443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.207 [2024-07-16 01:27:00.054449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.207 [2024-07-16 01:27:00.054456] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.207 [2024-07-16 01:27:00.054463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.207 [2024-07-16 01:27:00.054471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.207 [2024-07-16 01:27:00.054477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.207 [2024-07-16 01:27:00.054485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.207 [2024-07-16 01:27:00.054491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.207 [2024-07-16 01:27:00.054499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.207 [2024-07-16 01:27:00.058929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.207 [2024-07-16 01:27:00.058940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.207 [2024-07-16 01:27:00.058948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.207 [2024-07-16 01:27:00.058957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.207 [2024-07-16 01:27:00.058963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.207 [2024-07-16 01:27:00.058971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.207 [2024-07-16 01:27:00.058978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.207 [2024-07-16 01:27:00.058986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.207 [2024-07-16 01:27:00.058992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.207 [2024-07-16 01:27:00.059001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.207 [2024-07-16 01:27:00.059007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.207 [2024-07-16 01:27:00.059015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.207 [2024-07-16 01:27:00.059021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.207 [2024-07-16 01:27:00.059029] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.207 [2024-07-16 01:27:00.059038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.207 [2024-07-16 01:27:00.059046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.207 [2024-07-16 01:27:00.059053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.207 [2024-07-16 01:27:00.059060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.207 [2024-07-16 01:27:00.059067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.207 [2024-07-16 01:27:00.059075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.207 [2024-07-16 01:27:00.059081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.207 [2024-07-16 01:27:00.059089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.207 [2024-07-16 01:27:00.059095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.207 [2024-07-16 01:27:00.059103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.207 [2024-07-16 01:27:00.059110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.207 [2024-07-16 01:27:00.059118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.207 [2024-07-16 01:27:00.059125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.207 [2024-07-16 01:27:00.059132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.207 [2024-07-16 01:27:00.059138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.207 [2024-07-16 01:27:00.059146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.207 [2024-07-16 01:27:00.059153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.207 [2024-07-16 01:27:00.059161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.207 [2024-07-16 01:27:00.059168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.207 [2024-07-16 01:27:00.059176] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.207 [2024-07-16 01:27:00.059182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.208 [2024-07-16 01:27:00.059190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.208 [2024-07-16 01:27:00.059196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.208 [2024-07-16 01:27:00.059204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.208 [2024-07-16 01:27:00.059211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.208 [2024-07-16 01:27:00.059220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.208 [2024-07-16 01:27:00.059226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.208 [2024-07-16 01:27:00.059234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.208 [2024-07-16 01:27:00.059240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.208 [2024-07-16 01:27:00.059248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.208 [2024-07-16 01:27:00.059254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.208 [2024-07-16 01:27:00.059262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.208 [2024-07-16 01:27:00.059269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.208 [2024-07-16 01:27:00.059277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.208 [2024-07-16 01:27:00.059283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.208 [2024-07-16 01:27:00.059291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.208 [2024-07-16 01:27:00.059298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.208 [2024-07-16 01:27:00.059306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.208 [2024-07-16 01:27:00.059312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.208 [2024-07-16 01:27:00.059320] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.208 [2024-07-16 01:27:00.059326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.208 [2024-07-16 01:27:00.059335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.208 [2024-07-16 01:27:00.059347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.208 [2024-07-16 01:27:00.059355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.208 [2024-07-16 01:27:00.059362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.208 [2024-07-16 01:27:00.059370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.208 [2024-07-16 01:27:00.059376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.208 [2024-07-16 01:27:00.059384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.208 [2024-07-16 01:27:00.059390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.208 [2024-07-16 01:27:00.059398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.208 [2024-07-16 01:27:00.059406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.208 [2024-07-16 01:27:00.059414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.208 [2024-07-16 01:27:00.059420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.208 [2024-07-16 01:27:00.059428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.208 [2024-07-16 01:27:00.059434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.208 [2024-07-16 01:27:00.059442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.208 [2024-07-16 01:27:00.059449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.208 [2024-07-16 01:27:00.059456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.208 [2024-07-16 01:27:00.059463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.208 [2024-07-16 01:27:00.059471] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:34.208 [2024-07-16 01:27:00.059477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:34.208 [2024-07-16 01:27:00.059485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:34.208 [2024-07-16 01:27:00.059491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:34.208 [2024-07-16 01:27:00.059499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:34.208 [2024-07-16 01:27:00.059505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:34.208 [2024-07-16 01:27:00.059513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:34.208 [2024-07-16 01:27:00.059519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:34.208 [2024-07-16 01:27:00.059527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:34.208 [2024-07-16 01:27:00.059534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:34.208 [2024-07-16 01:27:00.059541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:34.208 [2024-07-16 01:27:00.059548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:34.208 [2024-07-16 01:27:00.059555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:34.208 [2024-07-16 01:27:00.059562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:34.208 [2024-07-16 01:27:00.059570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:34.208 [2024-07-16 01:27:00.059576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:34.208 [2024-07-16 01:27:00.059585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:34.208 [2024-07-16 01:27:00.059592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:34.208 [2024-07-16 01:27:00.059657] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1550470 was disconnected and freed. reset controller.
00:21:34.208 [2024-07-16 01:27:00.061573] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:21:34.208 [2024-07-16 01:27:00.061605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161aa10 (9): Bad file descriptor
00:21:34.208 [2024-07-16 01:27:00.061634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1720ac0 (9): Bad file descriptor
00:21:34.208 [2024-07-16 01:27:00.061649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1580fc0 (9): Bad file descriptor
00:21:34.208 [2024-07-16 01:27:00.061663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10a34e0 (9): Bad file descriptor
00:21:34.208 [2024-07-16 01:27:00.061677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16c2080 (9): Bad file descriptor
00:21:34.208 [2024-07-16 01:27:00.061692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1621050 (9): Bad file descriptor
00:21:34.208 [2024-07-16 01:27:00.061707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1580ba0 (9): Bad file descriptor
00:21:34.208 [2024-07-16 01:27:00.061726] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17200a0 (9): Bad file descriptor
00:21:34.208 [2024-07-16 01:27:00.062979] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:34.208 [2024-07-16 01:27:00.063017] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:21:34.208 [2024-07-16 01:27:00.063091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:34.208 [2024-07-16 01:27:00.063101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:34.208 [2024-07-16 01:27:00.063114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:34.208 [2024-07-16 01:27:00.063121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:34.208 [2024-07-16 01:27:00.063131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:34.208 [2024-07-16 01:27:00.063139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:34.208 [2024-07-16 01:27:00.063148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:34.208 [2024-07-16 01:27:00.063155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:34.208 [2024-07-16 01:27:00.063164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:34.208 [2024-07-16 01:27:00.063171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:34.208 [2024-07-16 01:27:00.063180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ
sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.208 [2024-07-16 01:27:00.063187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.208 [2024-07-16 01:27:00.063195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.208 [2024-07-16 01:27:00.063208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.208 [2024-07-16 01:27:00.063217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.208 [2024-07-16 01:27:00.063224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.208 [2024-07-16 01:27:00.063232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.208 [2024-07-16 01:27:00.063239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.209 [2024-07-16 01:27:00.063249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.209 [2024-07-16 01:27:00.063256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.209 [2024-07-16 01:27:00.063264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.209 [2024-07-16 01:27:00.063270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.209 [2024-07-16 01:27:00.063279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.209 [2024-07-16 01:27:00.063286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.209 [2024-07-16 01:27:00.063294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.209 [2024-07-16 01:27:00.063301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.209 [2024-07-16 01:27:00.063309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.209 [2024-07-16 01:27:00.063315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.209 [2024-07-16 01:27:00.063323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.209 [2024-07-16 01:27:00.063330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.209 [2024-07-16 01:27:00.063348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.209 [2024-07-16 01:27:00.063355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.209 [2024-07-16 01:27:00.063363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.209 [2024-07-16 01:27:00.063369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.209 [2024-07-16 01:27:00.063377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.209 [2024-07-16 01:27:00.063385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.209 [2024-07-16 01:27:00.063393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.209 [2024-07-16 01:27:00.063399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.209 [2024-07-16 01:27:00.063409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.209 [2024-07-16 01:27:00.063417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.209 [2024-07-16 01:27:00.063439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.209 [2024-07-16 01:27:00.063448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.209 [2024-07-16 01:27:00.063456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.209 [2024-07-16 01:27:00.063463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.209 [2024-07-16 01:27:00.063471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.209 [2024-07-16 01:27:00.063477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.209 [2024-07-16 01:27:00.063485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.209 [2024-07-16 01:27:00.063492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.209 [2024-07-16 01:27:00.063499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.209 [2024-07-16 01:27:00.063506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.209 [2024-07-16 01:27:00.063514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:34.209 [2024-07-16 01:27:00.063520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:34.209 [... 38 near-identical NOTICE pairs condensed: READ sqid:1 cid:26-63 nsid:1 lba:19712-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:21:34.210 [2024-07-16 01:27:00.065760] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:34.210 [... 64 near-identical NOTICE pairs condensed: READ sqid:1 cid:0-63 nsid:1 lba:8192-16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:21:34.211 [2024-07-16 01:27:00.066835] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15feab0 is same with the state(5) to be set
00:21:34.211 [2024-07-16 01:27:00.068196] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:21:34.211 [2024-07-16 01:27:00.068216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.211 [2024-07-16 01:27:00.068225] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:21:34.211 [2024-07-16 01:27:00.068419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.211 [2024-07-16 01:27:00.068434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161aa10 with addr=10.0.0.2, port=4420
00:21:34.211 [2024-07-16 01:27:00.068442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x161aa10 is same with the state(5) to be set
00:21:34.212 [2024-07-16 01:27:00.068548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.212 [2024-07-16 01:27:00.068561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1621050 with addr=10.0.0.2, port=4420
00:21:34.212 [2024-07-16 01:27:00.068568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1621050 is same with the state(5) to be set
00:21:34.212 [2024-07-16 01:27:00.068639] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:34.212 [2024-07-16 01:27:00.068686] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:34.212 [2024-07-16 01:27:00.068729] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:34.212 [2024-07-16 01:27:00.068990] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:34.212 [2024-07-16 01:27:00.069188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.212 [2024-07-16 01:27:00.069202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17200a0 with addr=10.0.0.2, port=4420
00:21:34.212 [2024-07-16 01:27:00.069211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17200a0 is same with the state(5) to be set
00:21:34.212 [2024-07-16 01:27:00.069290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.212 [2024-07-16 01:27:00.069301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1554d60 with addr=10.0.0.2, port=4420
00:21:34.212 [2024-07-16 01:27:00.069308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1554d60 is same with the state(5) to be set
00:21:34.212 [2024-07-16 01:27:00.069452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.212 [2024-07-16 01:27:00.069462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16c94a0 with addr=10.0.0.2, port=4420
00:21:34.212 [2024-07-16 01:27:00.069469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c94a0 is same with the state(5) to be set
00:21:34.212 [2024-07-16 01:27:00.069480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161aa10 (9): Bad file descriptor
00:21:34.212 [2024-07-16 01:27:00.069490] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1621050 (9): Bad file descriptor
00:21:34.212 [2024-07-16 01:27:00.070014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17200a0 (9): Bad file descriptor
00:21:34.212 [2024-07-16 01:27:00.070028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1554d60 (9): Bad file descriptor
00:21:34.212 [2024-07-16 01:27:00.070036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16c94a0 (9): Bad file descriptor
00:21:34.212 [2024-07-16 01:27:00.070044] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:21:34.212 [2024-07-16 01:27:00.070051] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:21:34.212 [2024-07-16 01:27:00.070059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:21:34.212 [2024-07-16 01:27:00.070072] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:21:34.212 [2024-07-16 01:27:00.070078] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:21:34.212 [2024-07-16 01:27:00.070085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:21:34.212 [2024-07-16 01:27:00.070136] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:34.212 [2024-07-16 01:27:00.070144] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:34.212 [2024-07-16 01:27:00.070150] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:21:34.212 [2024-07-16 01:27:00.070156] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:21:34.212 [2024-07-16 01:27:00.070162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:21:34.212 [2024-07-16 01:27:00.070176] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:34.212 [2024-07-16 01:27:00.070182] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:34.212 [2024-07-16 01:27:00.070188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:34.212 [2024-07-16 01:27:00.070198] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:21:34.212 [2024-07-16 01:27:00.070204] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:21:34.212 [2024-07-16 01:27:00.070210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:21:34.212 [2024-07-16 01:27:00.070248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:34.212 [2024-07-16 01:27:00.070255] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:34.212 [2024-07-16 01:27:00.070261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:34.212 [... 3 near-identical NOTICE pairs condensed: WRITE sqid:1 cid:61-63 nsid:1 lba:24192-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:21:34.212 [... 61 near-identical NOTICE pairs condensed: READ sqid:1 cid:0-60 nsid:1 lba:16384-24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:21:34.214 [2024-07-16 01:27:00.072733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a41a0 is same with the state(5) to be set
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.214 [2024-07-16 01:27:00.073839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.214 [2024-07-16 01:27:00.073847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.214 [2024-07-16 01:27:00.073855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.214 [2024-07-16 01:27:00.073863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.214 [2024-07-16 01:27:00.073870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.214 [2024-07-16 01:27:00.073879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.214 [2024-07-16 01:27:00.073886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.214 [2024-07-16 01:27:00.073897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.214 [2024-07-16 01:27:00.073904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.214 [2024-07-16 01:27:00.073913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.214 [2024-07-16 01:27:00.073920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.214 [2024-07-16 01:27:00.073928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.214 [2024-07-16 01:27:00.073935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.214 [2024-07-16 01:27:00.073944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.214 [2024-07-16 01:27:00.073950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.214 [2024-07-16 01:27:00.073959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.214 [2024-07-16 01:27:00.073966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.214 [2024-07-16 01:27:00.073974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.214 [2024-07-16 01:27:00.073981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.214 [2024-07-16 01:27:00.073990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.214 [2024-07-16 01:27:00.073997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.214 [2024-07-16 01:27:00.074005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.214 [2024-07-16 01:27:00.074012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.214 [2024-07-16 01:27:00.074020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.214 [2024-07-16 01:27:00.074027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.214 [2024-07-16 01:27:00.074036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.214 [2024-07-16 01:27:00.074043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.214 [2024-07-16 01:27:00.074051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.214 [2024-07-16 01:27:00.074058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.214 [2024-07-16 01:27:00.074067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.214 [2024-07-16 01:27:00.074073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.214 [2024-07-16 01:27:00.074082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.214 [2024-07-16 01:27:00.074090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.214 [2024-07-16 01:27:00.074098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.214 [2024-07-16 01:27:00.074105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.214 [2024-07-16 01:27:00.074114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.214 [2024-07-16 01:27:00.074121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.214 [2024-07-16 01:27:00.074129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.214 [2024-07-16 01:27:00.074136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.214 [2024-07-16 01:27:00.074144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.214 [2024-07-16 01:27:00.074151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.214 [2024-07-16 01:27:00.074159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.214 [2024-07-16 01:27:00.074166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.214 [2024-07-16 01:27:00.074175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.214 [2024-07-16 01:27:00.074182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.214 [2024-07-16 01:27:00.074190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.214 [2024-07-16 01:27:00.074197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.214 [2024-07-16 01:27:00.074206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.214 [2024-07-16 01:27:00.074213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.214 [2024-07-16 01:27:00.074221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.214 [2024-07-16 01:27:00.074228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.214 [2024-07-16 01:27:00.074237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.214 [2024-07-16 01:27:00.074244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.214 [2024-07-16 01:27:00.074253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.214 [2024-07-16 01:27:00.074260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.214 [2024-07-16 01:27:00.074268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.214 [2024-07-16 01:27:00.074276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.214 [2024-07-16 01:27:00.074286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.214 [2024-07-16 01:27:00.074293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.214 [2024-07-16 01:27:00.074301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:34.214 [2024-07-16 01:27:00.074308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.214 [2024-07-16 01:27:00.074317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.214 [2024-07-16 01:27:00.074324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.214 [2024-07-16 01:27:00.074333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.214 [2024-07-16 01:27:00.074345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.214 [2024-07-16 01:27:00.074354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.214 [2024-07-16 01:27:00.074361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.214 [2024-07-16 01:27:00.074369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.214 [2024-07-16 01:27:00.074377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.214 [2024-07-16 01:27:00.074385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.214 [2024-07-16 01:27:00.074392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.214 [2024-07-16 01:27:00.074401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.214 [2024-07-16 01:27:00.074408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.215 [2024-07-16 01:27:00.074417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.215 [2024-07-16 01:27:00.074423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.215 [2024-07-16 01:27:00.074432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.215 [2024-07-16 01:27:00.074440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.215 [2024-07-16 01:27:00.074448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.215 [2024-07-16 01:27:00.074455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.215 [2024-07-16 01:27:00.074464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:34.215 [2024-07-16 01:27:00.074472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.215 [2024-07-16 01:27:00.074480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.215 [2024-07-16 01:27:00.074489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.215 [2024-07-16 01:27:00.074497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.215 [2024-07-16 01:27:00.074504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.215 [2024-07-16 01:27:00.074513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.215 [2024-07-16 01:27:00.074520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.215 [2024-07-16 01:27:00.074528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.215 [2024-07-16 01:27:00.074535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.215 [2024-07-16 01:27:00.074544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.215 [2024-07-16 01:27:00.074551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.215 [2024-07-16 01:27:00.074559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.215 [2024-07-16 01:27:00.074566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.215 [2024-07-16 01:27:00.074575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.215 [2024-07-16 01:27:00.074582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.215 [2024-07-16 01:27:00.074591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.215 [2024-07-16 01:27:00.074598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.215 [2024-07-16 01:27:00.074607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.215 [2024-07-16 01:27:00.074613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.215 [2024-07-16 01:27:00.074622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.215 [2024-07-16 
01:27:00.074629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.215 [2024-07-16 01:27:00.074638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.215 [2024-07-16 01:27:00.074644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.215 [2024-07-16 01:27:00.074653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.215 [2024-07-16 01:27:00.074660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.215 [2024-07-16 01:27:00.074669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.215 [2024-07-16 01:27:00.074676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.215 [2024-07-16 01:27:00.074685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.215 [2024-07-16 01:27:00.074693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.215 [2024-07-16 01:27:00.074701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.215 [2024-07-16 01:27:00.074708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.215 [2024-07-16 01:27:00.074716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.215 [2024-07-16 01:27:00.074723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.215 [2024-07-16 01:27:00.074731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.215 [2024-07-16 01:27:00.074738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.215 [2024-07-16 01:27:00.074747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.215 [2024-07-16 01:27:00.074754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.215 [2024-07-16 01:27:00.074762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.215 [2024-07-16 01:27:00.074769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.215 [2024-07-16 01:27:00.074777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.215 [2024-07-16 01:27:00.074784] 
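The "(00/08)" pair printed with every completion above is the NVMe status (SCT/SC): status code type 0x0 (generic command status) and status code 0x08, "Command Aborted due to SQ Deletion" -- every READ still queued on the qpair is failed back when its submission queue is deleted during the TCP qpair teardown. A minimal sketch of the decode, using a simplified stand-in for SPDK's struct spdk_nvme_cpl (field layout and constant names here are illustrative, not the exact spdk/nvme_spec.h definitions):

#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-in for the status field of SPDK's struct spdk_nvme_cpl;
 * the real definition packs these as bitfields. */
struct cpl_status {
	uint16_t sct; /* status code type: 0x0 = generic command status */
	uint16_t sc;  /* status code: 0x08 = Command Aborted due to SQ Deletion */
};

#define SCT_GENERIC            0x0
#define SC_ABORTED_SQ_DELETION 0x8

/* True for the "(00/08)" completions printed above: the command never
 * failed on the media -- it was still queued when its SQ was deleted. */
static bool
aborted_by_sq_deletion(const struct cpl_status *st)
{
	return st->sct == SCT_GENERIC && st->sc == SC_ABORTED_SQ_DELETION;
}

A host-side consumer would typically treat this status as retryable once the qpair reconnects, rather than surfacing the abort to the caller.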
00:21:34.215 [2024-07-16 01:27:00.074792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a5500 is same with the state(5) to be set
00:21:34.215 [2024-07-16 01:27:00.075813 .. 01:27:00.076777] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0..63 nsid:1 lba:16384..24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, all 64 commands aborted with SQ DELETION (00/08)
00:21:34.217 [2024-07-16 01:27:00.076783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x154efa0 is same with the state(5) to be set
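The repeating *ERROR* lines come from nvme_tcp_qpair_set_recv_state() in SPDK's nvme_tcp.c; the message is emitted when a qpair is asked to enter the receive state it is already in (state 5 here, by its position in the enum likely the terminal error/quiescing state reached once the connection is going down). A sketch of that guard, assuming the upstream function is shaped roughly like this (struct and field names simplified):

#include <stdio.h>

/* Simplified tqpair with just the receive-state field that matters here. */
struct tqpair {
	int recv_state;
};

/* Sketch of the guard behind the *ERROR* lines above: a request to enter
 * the state the qpair is already in is logged and ignored rather than
 * applied twice. During teardown the error state can be requested again
 * by later cleanup steps, producing one such line per qpair
 * (0x16a41a0, 0x16a5500, 0x154efa0, ...). */
static void
set_recv_state(struct tqpair *tq, int state)
{
	if (tq->recv_state == state) {
		fprintf(stderr,
			"The recv state of tqpair=%p is same with the state(%d) to be set\n",
			(void *)tq, state);
		return;
	}
	tq->recv_state = state;
}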
00:21:34.217 [2024-07-16 01:27:00.077739 .. 01:27:00.078570] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0..55 nsid:1 lba:8192..15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each command aborted with SQ DELETION (00/08)
00:21:34.218 [2024-07-16 01:27:00.078577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.218 [2024-07-16 01:27:00.078584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.218 [2024-07-16 01:27:00.078591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.218 [2024-07-16 01:27:00.078599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.218 [2024-07-16 01:27:00.078606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.218 [2024-07-16 01:27:00.078614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.218 [2024-07-16 01:27:00.078620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.218 [2024-07-16 01:27:00.078629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.218 [2024-07-16 01:27:00.078635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.218 [2024-07-16 01:27:00.078643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.218 [2024-07-16 01:27:00.078650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.218 [2024-07-16 01:27:00.078657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.218 [2024-07-16 01:27:00.078664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.218 [2024-07-16 01:27:00.078672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.218 [2024-07-16 01:27:00.078678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.218 [2024-07-16 01:27:00.078686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.218 [2024-07-16 01:27:00.078693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.218 [2024-07-16 01:27:00.078700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdaff0 is same with the state(5) to be set 00:21:34.218 [2024-07-16 01:27:00.079679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.218 [2024-07-16 01:27:00.079694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.218 [2024-07-16 
01:27:00.079705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.218 [2024-07-16 01:27:00.079713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.218 [2024-07-16 01:27:00.079722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.218 [2024-07-16 01:27:00.079729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.218 [2024-07-16 01:27:00.079737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.218 [2024-07-16 01:27:00.079744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.218 [2024-07-16 01:27:00.079752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.218 [2024-07-16 01:27:00.079759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.218 [2024-07-16 01:27:00.079767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.218 [2024-07-16 01:27:00.079774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.218 [2024-07-16 01:27:00.079782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.218 [2024-07-16 01:27:00.079789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.218 [2024-07-16 01:27:00.079797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.218 [2024-07-16 01:27:00.079804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.218 [2024-07-16 01:27:00.079812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.218 [2024-07-16 01:27:00.079819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.218 [2024-07-16 01:27:00.079827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.218 [2024-07-16 01:27:00.079833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.218 [2024-07-16 01:27:00.079842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.218 [2024-07-16 01:27:00.079849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.218 [2024-07-16 01:27:00.079857] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.219 [2024-07-16 01:27:00.079864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.219 [2024-07-16 01:27:00.079872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.219 [2024-07-16 01:27:00.079879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.219 [2024-07-16 01:27:00.079887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.219 [2024-07-16 01:27:00.079893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.219 [2024-07-16 01:27:00.079901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.219 [2024-07-16 01:27:00.079910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.219 [2024-07-16 01:27:00.079918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.219 [2024-07-16 01:27:00.079925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.219 [2024-07-16 01:27:00.079933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.219 [2024-07-16 01:27:00.079940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.219 [2024-07-16 01:27:00.079948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.219 [2024-07-16 01:27:00.079955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.219 [2024-07-16 01:27:00.079962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.219 [2024-07-16 01:27:00.079969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.219 [2024-07-16 01:27:00.079977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.219 [2024-07-16 01:27:00.079984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.219 [2024-07-16 01:27:00.079992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.219 [2024-07-16 01:27:00.079998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.219 [2024-07-16 01:27:00.080006] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.219 [2024-07-16 01:27:00.080013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.219 [2024-07-16 01:27:00.080022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.219 [2024-07-16 01:27:00.080028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.219 [2024-07-16 01:27:00.080036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.219 [2024-07-16 01:27:00.080043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.219 [2024-07-16 01:27:00.080051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.219 [2024-07-16 01:27:00.080057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.219 [2024-07-16 01:27:00.080066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.219 [2024-07-16 01:27:00.080072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.219 [2024-07-16 01:27:00.080081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.219 [2024-07-16 01:27:00.080087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.219 [2024-07-16 01:27:00.080098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.219 [2024-07-16 01:27:00.080104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.219 [2024-07-16 01:27:00.080112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.219 [2024-07-16 01:27:00.080119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.219 [2024-07-16 01:27:00.080127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.219 [2024-07-16 01:27:00.080134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.219 [2024-07-16 01:27:00.080142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.219 [2024-07-16 01:27:00.080149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.219 [2024-07-16 01:27:00.080156] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.219 [2024-07-16 01:27:00.080163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.219 [2024-07-16 01:27:00.080171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.219 [2024-07-16 01:27:00.080178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.219 [2024-07-16 01:27:00.080187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.219 [2024-07-16 01:27:00.080193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.219 [2024-07-16 01:27:00.080202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.219 [2024-07-16 01:27:00.080208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.219 [2024-07-16 01:27:00.080216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.219 [2024-07-16 01:27:00.080223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.219 [2024-07-16 01:27:00.080232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.219 [2024-07-16 01:27:00.080238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.219 [2024-07-16 01:27:00.080246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.219 [2024-07-16 01:27:00.080253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.219 [2024-07-16 01:27:00.080261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.219 [2024-07-16 01:27:00.080267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.219 [2024-07-16 01:27:00.080275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.219 [2024-07-16 01:27:00.080283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.219 [2024-07-16 01:27:00.080292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.219 [2024-07-16 01:27:00.080298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.219 [2024-07-16 01:27:00.080306] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.219 [2024-07-16 01:27:00.080312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.219 [2024-07-16 01:27:00.080320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.219 [2024-07-16 01:27:00.080327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.219 [2024-07-16 01:27:00.080335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.219 [2024-07-16 01:27:00.080347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.219 [2024-07-16 01:27:00.080356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.219 [2024-07-16 01:27:00.080362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.219 [2024-07-16 01:27:00.080370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.219 [2024-07-16 01:27:00.080376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.219 [2024-07-16 01:27:00.080385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.219 [2024-07-16 01:27:00.080391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.219 [2024-07-16 01:27:00.080400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.219 [2024-07-16 01:27:00.080406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.219 [2024-07-16 01:27:00.080414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.219 [2024-07-16 01:27:00.080420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.219 [2024-07-16 01:27:00.080429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.219 [2024-07-16 01:27:00.080436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.219 [2024-07-16 01:27:00.080444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.219 [2024-07-16 01:27:00.080451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.219 [2024-07-16 01:27:00.080459] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.219 [2024-07-16 01:27:00.080465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.219 [2024-07-16 01:27:00.080475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.219 [2024-07-16 01:27:00.080481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.219 [2024-07-16 01:27:00.080490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.219 [2024-07-16 01:27:00.080496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.220 [2024-07-16 01:27:00.080504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.220 [2024-07-16 01:27:00.080511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.220 [2024-07-16 01:27:00.080519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.220 [2024-07-16 01:27:00.080525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.220 [2024-07-16 01:27:00.080534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.220 [2024-07-16 01:27:00.080540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.220 [2024-07-16 01:27:00.080548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.220 [2024-07-16 01:27:00.080554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.220 [2024-07-16 01:27:00.080562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.220 [2024-07-16 01:27:00.080569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.220 [2024-07-16 01:27:00.080577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.220 [2024-07-16 01:27:00.080583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.220 [2024-07-16 01:27:00.080591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.220 [2024-07-16 01:27:00.080597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.220 [2024-07-16 01:27:00.080605] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.220 [2024-07-16 01:27:00.080611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.220 [2024-07-16 01:27:00.080619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.220 [2024-07-16 01:27:00.080626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.220 [2024-07-16 01:27:00.080634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.220 [2024-07-16 01:27:00.080640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.220 [2024-07-16 01:27:00.080647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fd5c0 is same with the state(5) to be set 00:21:34.220 [2024-07-16 01:27:00.085093] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:21:34.220 [2024-07-16 01:27:00.085120] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:21:34.220 [2024-07-16 01:27:00.085128] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:21:34.220 [2024-07-16 01:27:00.085137] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:21:34.220 [2024-07-16 01:27:00.085217] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:34.220 task offset: 16384 on job bdev=Nvme7n1 fails
00:21:34.220
00:21:34.220                                                    Latency(us)
00:21:34.220 Device Information     : runtime(s)     IOPS      MiB/s    Fail/s     TO/s      Average        min          max
00:21:34.220 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:34.220 Job: Nvme1n1 ended in about 0.59 seconds with error
00:21:34.220 Verification LBA range: start 0x0 length 0x400
00:21:34.220 Nvme1n1                : 0.59         215.82    13.49    107.91     0.00    194872.08   25215.76    195734.19
00:21:34.220 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:34.220 Job: Nvme2n1 ended in about 0.60 seconds with error
00:21:34.220 Verification LBA range: start 0x0 length 0x400
00:21:34.220 Nvme2n1                : 0.60         212.72    13.29    106.36     0.00    192675.84   15978.30    191739.61
00:21:34.220 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:34.220 Job: Nvme3n1 ended in about 0.60 seconds with error
00:21:34.220 Verification LBA range: start 0x0 length 0x400
00:21:34.220 Nvme3n1                : 0.60         212.00    13.25    106.00     0.00    188211.53   16477.62    189742.32
00:21:34.220 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:34.220 Job: Nvme4n1 ended in about 0.61 seconds with error
00:21:34.220 Verification LBA range: start 0x0 length 0x400
00:21:34.220 Nvme4n1                : 0.61         211.31    13.21    105.65     0.00    183772.73   15416.56    212711.13
00:21:34.220 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:34.220 Job: Nvme5n1 ended in about 0.59 seconds with error
00:21:34.220 Verification LBA range: start 0x0 length 0x400
00:21:34.220 Nvme5n1                : 0.59         216.60    13.54    108.30     0.00    173699.98   10610.59    211712.49
00:21:34.220 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:34.220 Job: Nvme6n1 ended in about 0.61 seconds with error
00:21:34.220 Verification LBA range: start 0x0 length 0x400
00:21:34.220 Nvme6n1                : 0.61         105.32     6.58    105.32     0.00    261339.92   16227.96    234681.30
00:21:34.220 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:34.220 Job: Nvme7n1 ended in about 0.59 seconds with error
00:21:34.220 Verification LBA range: start 0x0 length 0x400
00:21:34.220 Nvme7n1                : 0.59         217.42    13.59    108.71     0.00    162710.67   25465.42    190740.97
00:21:34.220 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:34.220 Job: Nvme8n1 ended in about 0.59 seconds with error
00:21:34.220 Verification LBA range: start 0x0 length 0x400
00:21:34.220 Nvme8n1                : 0.59         217.11    13.57    108.55     0.00    157998.49   10735.42    198730.12
00:21:34.220 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:34.220 Job: Nvme9n1 ended in about 0.61 seconds with error
00:21:34.220 Verification LBA range: start 0x0 length 0x400
00:21:34.220 Nvme9n1                : 0.61         104.98     6.56    104.98     0.00    239610.64   16352.79    214708.42
00:21:34.220 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:34.220 Job: Nvme10n1 ended in about 0.60 seconds with error
00:21:34.220 Verification LBA range: start 0x0 length 0x400
00:21:34.220 Nvme10n1               : 0.60         107.41     6.71    107.41     0.00    225240.99   16477.62    233682.65
00:21:34.220 ===================================================================================================================
00:21:34.220 Total                  :             1820.69   113.79   1069.20     0.00    193118.78   10610.59    234681.30
00:21:34.220 [2024-07-16 01:27:00.109576] app.c:1058:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:21:34.220 [2024-07-16 01:27:00.109623] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:21:34.220 [2024-07-16 01:27:00.109859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:34.220 [2024-07-16 01:27:00.109876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1720ac0 with addr=10.0.0.2, port=4420
00:21:34.220 [2024-07-16 01:27:00.109886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1720ac0 is same with the state(5) to be set
00:21:34.220 [... the same connect() failed (errno = 111) / sock connection error / recv state triple repeats for tqpair=0x1580ba0, 0x1580fc0 and 0x10a34e0, all with addr=10.0.0.2, port=4420 ...]
00:21:34.220 [2024-07-16 01:27:00.111473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:21:34.220 [2024-07-16 01:27:00.111487] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:21:34.220 [2024-07-16 01:27:00.111495] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:21:34.220 [2024-07-16 01:27:00.111503] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.220 [2024-07-16 01:27:00.111512] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:21:34.220 [... another connect() failed / sock connection error / recv state triple follows for tqpair=0x16c2080 (addr=10.0.0.2, port=4420); tqpair=0x1720ac0, 0x1580ba0, 0x1580fc0 and 0x10a34e0 then report nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair (9): Bad file descriptor, and four bdev_nvme.c:2899 "Unable to perform failover, already in progress." notices follow ...]
00:21:34.220 [... connect() failed / sock connection error / recv state triples continue for tqpair=0x1621050, 0x161aa10, 0x16c94a0, 0x1554d60 and 0x17200a0, all with addr=10.0.0.2, port=4420 ...]
00:21:34.221 [2024-07-16 01:27:00.112965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16c2080 (9): Bad file descriptor
00:21:34.221 [... for each of cnode2, cnode3, cnode4 and cnode6 in turn: "Ctrlr is in error state" (nvme_ctrlr.c:4164:nvme_ctrlr_process_init), "controller reinitialization failed" (nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async) and "in failed state." (nvme_ctrlr.c:1106:nvme_ctrlr_fail), followed by four bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. entries ...]
00:21:34.221 [... "Failed to flush tqpair (9): Bad file descriptor" repeats for tqpair=0x1621050, 0x161aa10, 0x16c94a0, 0x1554d60 and 0x17200a0 ...]
00:21:34.221 [2024-07-16 01:27:00.113189] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:21:34.221 [2024-07-16 01:27:00.113195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:21:34.221 [2024-07-16 01:27:00.113200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:21:34.221 [2024-07-16 01:27:00.113224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:34.221 [... the error-state / reinitialization-failed / failed-state triple repeats for cnode5, cnode7, cnode10, cnode1 and cnode8, followed by five more "Resetting controller failed." entries ...]
00:21:34.480 01:27:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid=
00:21:34.480 01:27:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1
00:21:35.876 01:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 3449842
00:21:35.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3449842) - No such process
00:21:35.876 01:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true
00:21:35.876 01:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget
00:21:35.876 01:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:21:35.876 01:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:21:35.876 01:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:21:35.876 01:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini
00:21:35.876 01:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup
00:21:35.876 01:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync
00:21:35.876 01:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:21:35.876 01:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e
00:21:35.876 01:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20}
00:21:35.876 01:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:21:35.876 rmmod nvme_tcp
00:21:35.876 rmmod nvme_fabrics
00:21:35.876 rmmod nvme_keyring
00:21:35.876 01:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:21:35.876 01:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e
00:21:35.876 01:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0
00:21:35.876 01:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:21:35.876 01:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:21:35.876 01:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:21:35.876 01:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:21:35.876 01:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:21:35.876 01:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:21:35.876 01:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:35.876 01:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:21:35.876 01:27:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:37.794 01:27:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:21:37.794
00:21:37.794 real	0m7.649s
00:21:37.794 user	0m18.583s
00:21:37.794 sys	0m1.165s
00:21:37.794 01:27:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:21:37.794 01:27:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:37.794 ************************************
00:21:37.794 END TEST nvmf_shutdown_tc3
00:21:37.794 ************************************
00:21:37.794 01:27:03 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0
00:21:37.794 01:27:03 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT
00:21:37.794
00:21:37.794 real	0m30.641s
00:21:37.794 user	1m17.569s
00:21:37.794 sys	0m7.842s
00:21:37.794 01:27:03 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable
00:21:37.794 01:27:03 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:21:37.794 ************************************
00:21:37.794 END TEST nvmf_shutdown
00:21:37.794 ************************************
00:21:37.794 01:27:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:21:37.794 01:27:03 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target
00:21:37.794 01:27:03 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable
00:21:37.794 01:27:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:21:37.794 01:27:03 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host
00:21:37.794 01:27:03 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable
00:21:37.794 01:27:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:21:37.794 01:27:03 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]]
00:21:37.794 01:27:03 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:21:37.794 01:27:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:21:37.794 01:27:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:21:37.794 01:27:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:21:37.794 ************************************
00:21:37.794 START TEST nvmf_multicontroller
00:21:37.794 ************************************
00:21:37.794 01:27:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:21:38.052 * Looking for test storage...
00:21:38.052 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:21:38.052 01:27:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:21:38.052 01:27:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s
00:21:38.052 01:27:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:21:38.052 01:27:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:21:38.052 01:27:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:21:38.052 01:27:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:21:38.052 01:27:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:21:38.052 01:27:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:21:38.052 01:27:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:21:38.052 01:27:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:21:38.052 01:27:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:21:38.052 01:27:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:21:38.052 01:27:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:21:38.052 01:27:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
00:21:38.052 01:27:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:21:38.052 01:27:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:21:38.052 01:27:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:21:38.052 01:27:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:21:38.052 01:27:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:21:38.052 01:27:03 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:21:38.052 01:27:03 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:21:38.052 01:27:03 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:21:38.052 [... paths/export.sh@2-@6: PATH is rebuilt by prepending the /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin toolchain directories (repeated several times) ahead of the standard system directories (/usr/local/bin, /usr/local/sbin, /var/spdk/dependencies/pip/bin, /usr/sbin, /usr/bin, /sbin, /bin, /var/lib/snapd/snap/bin), exported, and echoed ...]
00:21:38.053 01:27:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0
00:21:38.053 01:27:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:21:38.053 01:27:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:21:38.053 01:27:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:21:38.053 01:27:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:21:38.053 01:27:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:21:38.053 01:27:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:21:38.053 01:27:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:21:38.053 01:27:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0
00:21:38.053 01:27:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64
00:21:38.053 01:27:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:21:38.053 01:27:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000
00:21:38.053 01:27:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001
00:21:38.053 01:27:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:21:38.053 01:27:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']'
00:21:38.053 01:27:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:38.053 01:27:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:38.053 01:27:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:38.053 01:27:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:38.053 01:27:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:38.053 01:27:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.053 01:27:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:38.053 01:27:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:38.053 01:27:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:38.053 01:27:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:38.053 01:27:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:21:38.053 01:27:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:43.320 01:27:09 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:43.320 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:43.320 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:43.320 Found net devices under 0000:86:00.0: cvl_0_0 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:43.320 Found net devices under 0000:86:00.1: cvl_0_1 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:43.320 01:27:09 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:43.320 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:43.579 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:43.579 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:43.579 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:43.579 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:43.579 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:21:43.579 00:21:43.579 --- 10.0.0.2 ping statistics --- 00:21:43.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.579 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:21:43.579 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:43.579 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:43.579 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:21:43.579 00:21:43.579 --- 10.0.0.1 ping statistics --- 00:21:43.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.579 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:21:43.579 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:43.579 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:21:43.579 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:43.579 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:43.579 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:43.579 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:43.579 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:43.579 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:43.579 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:43.579 01:27:09 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:43.579 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:43.579 01:27:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:43.579 01:27:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.579 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=3454268 00:21:43.579 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 3454268 00:21:43.579 01:27:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:43.579 01:27:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3454268 ']' 00:21:43.579 01:27:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:43.579 01:27:09 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:21:43.579 01:27:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:43.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:43.579 01:27:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:43.579 01:27:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.579 [2024-07-16 01:27:09.449206] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:21:43.579 [2024-07-16 01:27:09.449245] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:43.579 EAL: No free 2048 kB hugepages reported on node 1 00:21:43.579 [2024-07-16 01:27:09.509385] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:43.837 [2024-07-16 01:27:09.588029] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:43.837 [2024-07-16 01:27:09.588066] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:43.837 [2024-07-16 01:27:09.588073] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:43.837 [2024-07-16 01:27:09.588079] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:43.837 [2024-07-16 01:27:09.588084] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
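The namespace plumbing traced above (nvmf/common.sh@242-268) is what lets one host act as both NVMe/TCP target and initiator over a real e810 port pair: the target-side port is moved into its own network namespace and the target application runs inside it. A minimal standalone sketch of that setup, using the interface names and addresses from this run (cvl_0_0/cvl_0_1 and 10.0.0.0/24 are specific to this host; run as root):

  # Move the target-side port into its own network namespace and address both ends.
  ip netns add cvl_0_0_ns_spdk                       # target namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port leaves the root ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                 # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator sanity check
  # The target itself then runs inside the namespace, as in the nvmf/common.sh@480 line above:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE

Keeping the target in a namespace is what makes the loopback-style topology work on physical NICs without a second machine.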
00:21:43.837 [2024-07-16 01:27:09.588180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:43.837 [2024-07-16 01:27:09.588249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:43.837 [2024-07-16 01:27:09.588250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:44.411 01:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:44.411 01:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:21:44.411 01:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:44.411 01:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:44.411 01:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:44.411 01:27:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:44.411 01:27:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:44.412 01:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.412 01:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:44.412 [2024-07-16 01:27:10.292715] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:44.412 01:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.412 01:27:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:44.412 01:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.412 01:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:44.412 Malloc0 00:21:44.412 01:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.412 01:27:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:44.412 01:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.412 01:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:44.412 01:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.412 01:27:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:44.412 01:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.412 01:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:44.412 01:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.412 01:27:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:44.412 01:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.412 01:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:44.412 [2024-07-16 01:27:10.354427] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:44.412 01:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.412 
01:27:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:44.412 01:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.412 01:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:44.412 [2024-07-16 01:27:10.362344] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:44.412 01:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.412 01:27:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:44.412 01:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.412 01:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:44.412 Malloc1 00:21:44.412 01:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.412 01:27:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:44.412 01:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.412 01:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:44.412 01:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.412 01:27:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:44.412 01:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.412 01:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:44.669 01:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.669 01:27:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:44.669 01:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.669 01:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:44.669 01:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.669 01:27:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:44.669 01:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.669 01:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:44.669 01:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.669 01:27:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3454634 00:21:44.669 01:27:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:44.669 01:27:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:44.669 01:27:10 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 3454634 /var/tmp/bdevperf.sock 00:21:44.669 01:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3454634 ']' 00:21:44.669 01:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:44.669 01:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:44.669 01:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:44.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:44.669 01:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:44.669 01:27:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:45.597 NVMe0n1 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.597 1 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:45.597 request: 00:21:45.597 { 00:21:45.597 "name": "NVMe0", 00:21:45.597 "trtype": "tcp", 00:21:45.597 "traddr": "10.0.0.2", 00:21:45.597 "adrfam": "ipv4", 00:21:45.597 "trsvcid": "4420", 00:21:45.597 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:45.597 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:45.597 "hostaddr": "10.0.0.2", 00:21:45.597 "hostsvcid": "60000", 00:21:45.597 "prchk_reftag": false, 00:21:45.597 "prchk_guard": false, 00:21:45.597 "hdgst": false, 00:21:45.597 "ddgst": false, 00:21:45.597 "method": "bdev_nvme_attach_controller", 00:21:45.597 "req_id": 1 00:21:45.597 } 00:21:45.597 Got JSON-RPC error response 00:21:45.597 response: 00:21:45.597 { 00:21:45.597 "code": -114, 00:21:45.597 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:45.597 } 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:45.597 request: 00:21:45.597 { 00:21:45.597 "name": "NVMe0", 00:21:45.597 "trtype": "tcp", 00:21:45.597 "traddr": "10.0.0.2", 00:21:45.597 "adrfam": "ipv4", 00:21:45.597 "trsvcid": "4420", 00:21:45.597 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:45.597 "hostaddr": "10.0.0.2", 00:21:45.597 "hostsvcid": "60000", 00:21:45.597 "prchk_reftag": false, 00:21:45.597 "prchk_guard": false, 00:21:45.597 
"hdgst": false, 00:21:45.597 "ddgst": false, 00:21:45.597 "method": "bdev_nvme_attach_controller", 00:21:45.597 "req_id": 1 00:21:45.597 } 00:21:45.597 Got JSON-RPC error response 00:21:45.597 response: 00:21:45.597 { 00:21:45.597 "code": -114, 00:21:45.597 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:45.597 } 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:45.597 request: 00:21:45.597 { 00:21:45.597 "name": "NVMe0", 00:21:45.597 "trtype": "tcp", 00:21:45.597 "traddr": "10.0.0.2", 00:21:45.597 "adrfam": "ipv4", 00:21:45.597 "trsvcid": "4420", 00:21:45.597 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:45.597 "hostaddr": "10.0.0.2", 00:21:45.597 "hostsvcid": "60000", 00:21:45.597 "prchk_reftag": false, 00:21:45.597 "prchk_guard": false, 00:21:45.597 "hdgst": false, 00:21:45.597 "ddgst": false, 00:21:45.597 "multipath": "disable", 00:21:45.597 "method": "bdev_nvme_attach_controller", 00:21:45.597 "req_id": 1 00:21:45.597 } 00:21:45.597 Got JSON-RPC error response 00:21:45.597 response: 00:21:45.597 { 00:21:45.597 "code": -114, 00:21:45.597 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:21:45.597 } 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:45.597 01:27:11 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.597 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:45.597 request: 00:21:45.597 { 00:21:45.597 "name": "NVMe0", 00:21:45.597 "trtype": "tcp", 00:21:45.597 "traddr": "10.0.0.2", 00:21:45.597 "adrfam": "ipv4", 00:21:45.598 "trsvcid": "4420", 00:21:45.598 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:45.598 "hostaddr": "10.0.0.2", 00:21:45.598 "hostsvcid": "60000", 00:21:45.598 "prchk_reftag": false, 00:21:45.598 "prchk_guard": false, 00:21:45.598 "hdgst": false, 00:21:45.598 "ddgst": false, 00:21:45.598 "multipath": "failover", 00:21:45.598 "method": "bdev_nvme_attach_controller", 00:21:45.598 "req_id": 1 00:21:45.598 } 00:21:45.598 Got JSON-RPC error response 00:21:45.598 response: 00:21:45.598 { 00:21:45.598 "code": -114, 00:21:45.598 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:45.598 } 00:21:45.598 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:45.598 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:45.598 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:45.598 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:45.598 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:45.598 01:27:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:45.598 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.598 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:45.853 00:21:45.853 01:27:11 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.853 01:27:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:45.853 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.853 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:45.853 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.853 01:27:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:45.853 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.853 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:45.853 00:21:45.853 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.853 01:27:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:45.853 01:27:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:45.853 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.853 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:45.853 01:27:11 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.853 01:27:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:45.853 01:27:11 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:47.216 0 00:21:47.216 01:27:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:47.216 01:27:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.216 01:27:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:47.216 01:27:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.216 01:27:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 3454634 00:21:47.216 01:27:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 3454634 ']' 00:21:47.216 01:27:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3454634 00:21:47.216 01:27:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:21:47.216 01:27:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:47.216 01:27:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3454634 00:21:47.216 01:27:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:47.216 01:27:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:47.216 01:27:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3454634' 00:21:47.216 killing process with pid 3454634 00:21:47.216 01:27:12 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3454634 00:21:47.216 01:27:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3454634 00:21:47.216 01:27:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:47.217 01:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.217 01:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:47.217 01:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.217 01:27:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:47.217 01:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.217 01:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:47.217 01:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.217 01:27:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:21:47.217 01:27:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:47.217 01:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:21:47.217 01:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:47.217 01:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:21:47.217 01:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:21:47.217 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:47.217 [2024-07-16 01:27:10.453884] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:21:47.217 [2024-07-16 01:27:10.453930] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3454634 ] 00:21:47.217 EAL: No free 2048 kB hugepages reported on node 1 00:21:47.217 [2024-07-16 01:27:10.510916] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.217 [2024-07-16 01:27:10.588768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.217 [2024-07-16 01:27:11.761725] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 7cc8edf1-f4d3-4d3a-b1bb-4ad4446c1932 already exists 00:21:47.217 [2024-07-16 01:27:11.761755] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:7cc8edf1-f4d3-4d3a-b1bb-4ad4446c1932 alias for bdev NVMe1n1 00:21:47.217 [2024-07-16 01:27:11.761763] bdev_nvme.c:4325:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:47.217 Running I/O for 1 seconds... 
00:21:47.217 00:21:47.217 Latency(us) 00:21:47.217 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:47.217 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:47.217 NVMe0n1 : 1.00 25154.41 98.26 0.00 0.00 5082.22 3167.57 14168.26 00:21:47.217 =================================================================================================================== 00:21:47.217 Total : 25154.41 98.26 0.00 0.00 5082.22 3167.57 14168.26 00:21:47.217 Received shutdown signal, test time was about 1.000000 seconds 00:21:47.217 00:21:47.217 Latency(us) 00:21:47.217 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:47.217 =================================================================================================================== 00:21:47.217 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:47.217 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:47.217 01:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:47.217 01:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:21:47.217 01:27:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:21:47.217 01:27:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:47.217 01:27:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:21:47.217 01:27:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:47.217 01:27:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:21:47.217 01:27:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:47.217 01:27:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:47.217 rmmod nvme_tcp 00:21:47.473 rmmod nvme_fabrics 00:21:47.473 rmmod nvme_keyring 00:21:47.473 01:27:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:47.473 01:27:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:21:47.473 01:27:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:21:47.473 01:27:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 3454268 ']' 00:21:47.473 01:27:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 3454268 00:21:47.473 01:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 3454268 ']' 00:21:47.473 01:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3454268 00:21:47.473 01:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:21:47.473 01:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:47.473 01:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3454268 00:21:47.473 01:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:47.473 01:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:47.473 01:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3454268' 00:21:47.473 killing process with pid 3454268 00:21:47.473 01:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3454268 00:21:47.473 01:27:13 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3454268 00:21:47.731 01:27:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:47.731 01:27:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:47.731 01:27:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:47.731 01:27:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:47.731 01:27:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:47.731 01:27:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.731 01:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:47.731 01:27:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:49.628 01:27:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:49.628 00:21:49.628 real 0m11.844s 00:21:49.628 user 0m16.280s 00:21:49.628 sys 0m4.914s 00:21:49.628 01:27:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:49.628 01:27:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:49.628 ************************************ 00:21:49.628 END TEST nvmf_multicontroller 00:21:49.628 ************************************ 00:21:49.885 01:27:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:49.885 01:27:15 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:49.885 01:27:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:49.885 01:27:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:49.885 01:27:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:49.885 ************************************ 00:21:49.885 START TEST nvmf_aer 00:21:49.885 ************************************ 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:49.885 * Looking for test storage... 
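The multicontroller sequence that just finished boils down to a handful of RPCs and can be replayed by hand. A condensed sketch, assuming a built SPDK tree (rpc_cmd in the trace is a thin wrapper around scripts/rpc.py; the NQNs, addresses, and ports are the ones used in this run):

  # Target side: TCP transport, one malloc-backed subsystem, two listeners.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

  # Host side, against the RPC socket of the bdevperf instance started with
  # "bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f":
  RPC='scripts/rpc.py -s /var/tmp/bdevperf.sock'
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000     # exposes NVMe0n1
  # The traced variants with a different hostnqn (-q), a different subsystem NQN,
  # and the explicit "-x disable"/"-x failover" multipath modes were all rejected
  # with JSON-RPC error -114, as in the request/response pairs above:
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 || echo rejected
  # The second listener of the same subsystem is accepted as another path:
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1

After this, bdev_nvme_get_controllers reports two controllers (the grep -c check above), and bdevperf.py perform_tests drives the write workload whose results appear in try.txt.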
00:21:49.885 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:21:49.885 01:27:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:55.136 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:55.136 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:21:55.136 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:55.136 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:21:55.136 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:55.136 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:55.136 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:55.136 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:21:55.136 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:55.136 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:21:55.136 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:21:55.136 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:21:55.136 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:21:55.136 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:21:55.136 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:21:55.136 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:55.137 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 
0x159b)' 00:21:55.137 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:55.137 Found net devices under 0000:86:00.0: cvl_0_0 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:55.137 Found net devices under 0000:86:00.1: cvl_0_1 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:55.137 
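Annotation: the gather_supported_nvmf_pci_devs pass above reduces to a small sysfs walk: match supported vendor:device IDs, map each PCI function to its kernel net device, and keep only interfaces whose link is up. A minimal sketch of the idea (simplified, not the verbatim nvmf/common.sh; device IDs and paths as reported in this log):

    # Match Intel E810 functions by vendor/device ID, then resolve each
    # PCI function to its net device via /sys/bus/pci/devices/<bdf>/net/.
    e810_ids=("0x1592" "0x159b")
    net_devs=()
    for pci in /sys/bus/pci/devices/*; do
        [[ $(cat "$pci/vendor") == 0x8086 ]] || continue
        device=$(cat "$pci/device")
        [[ " ${e810_ids[*]} " == *" $device "* ]] || continue
        for dev in "$pci"/net/*; do               # e.g. .../net/cvl_0_0
            [[ -e $dev ]] || continue
            dev=${dev##*/}
            [[ $(cat "/sys/class/net/$dev/operstate") == up ]] && net_devs+=("$dev")
        done
    done

With two up interfaces found, the script picks the first as the target port and the second as the initiator port, which is what the NVMF_TARGET_INTERFACE/NVMF_INITIATOR_INTERFACE assignments below record.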
01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:55.137 01:27:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:55.137 01:27:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:55.137 01:27:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:55.137 01:27:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:55.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:55.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:21:55.137 00:21:55.137 --- 10.0.0.2 ping statistics --- 00:21:55.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.137 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:21:55.137 01:27:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:55.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:55.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:21:55.137 00:21:55.137 --- 10.0.0.1 ping statistics --- 00:21:55.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.137 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:21:55.137 01:27:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:55.137 01:27:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:21:55.137 01:27:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:55.137 01:27:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:55.137 01:27:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:55.137 01:27:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:55.137 01:27:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:55.137 01:27:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:55.137 01:27:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:55.137 01:27:21 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:55.137 01:27:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:55.137 01:27:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:55.137 01:27:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:55.137 01:27:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=3458465 00:21:55.137 01:27:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 3458465 00:21:55.137 01:27:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:55.137 01:27:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 3458465 ']' 00:21:55.137 01:27:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:55.137 01:27:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:55.137 01:27:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:55.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:55.137 01:27:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:55.137 01:27:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:55.394 [2024-07-16 01:27:21.140629] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:21:55.394 [2024-07-16 01:27:21.140673] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:55.394 EAL: No free 2048 kB hugepages reported on node 1 00:21:55.394 [2024-07-16 01:27:21.200866] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:55.394 [2024-07-16 01:27:21.274535] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:55.394 [2024-07-16 01:27:21.274587] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:55.394 [2024-07-16 01:27:21.274594] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:55.394 [2024-07-16 01:27:21.274600] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:55.394 [2024-07-16 01:27:21.274604] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:55.394 [2024-07-16 01:27:21.274651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:55.394 [2024-07-16 01:27:21.274750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:55.394 [2024-07-16 01:27:21.274836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:55.394 [2024-07-16 01:27:21.274837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:55.959 01:27:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:55.959 01:27:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:21:55.959 01:27:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:55.959 01:27:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:55.959 01:27:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:56.238 01:27:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:56.238 01:27:21 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:56.238 01:27:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.238 01:27:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:56.238 [2024-07-16 01:27:21.978328] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:56.238 01:27:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.238 01:27:21 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:56.238 01:27:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.238 01:27:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:56.238 Malloc0 00:21:56.238 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.238 01:27:22 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:56.238 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.238 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:56.238 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.238 01:27:22 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:56.238 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.238 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:56.238 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.238 01:27:22 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:56.238 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.238 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:56.238 [2024-07-16 01:27:22.029739] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:21:56.238 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.238 01:27:22 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:56.238 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.238 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:56.238 [ 00:21:56.238 { 00:21:56.238 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:56.238 "subtype": "Discovery", 00:21:56.238 "listen_addresses": [], 00:21:56.238 "allow_any_host": true, 00:21:56.238 "hosts": [] 00:21:56.238 }, 00:21:56.238 { 00:21:56.238 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:56.238 "subtype": "NVMe", 00:21:56.238 "listen_addresses": [ 00:21:56.238 { 00:21:56.238 "trtype": "TCP", 00:21:56.238 "adrfam": "IPv4", 00:21:56.238 "traddr": "10.0.0.2", 00:21:56.238 "trsvcid": "4420" 00:21:56.238 } 00:21:56.238 ], 00:21:56.238 "allow_any_host": true, 00:21:56.238 "hosts": [], 00:21:56.238 "serial_number": "SPDK00000000000001", 00:21:56.238 "model_number": "SPDK bdev Controller", 00:21:56.238 "max_namespaces": 2, 00:21:56.238 "min_cntlid": 1, 00:21:56.238 "max_cntlid": 65519, 00:21:56.238 "namespaces": [ 00:21:56.238 { 00:21:56.238 "nsid": 1, 00:21:56.238 "bdev_name": "Malloc0", 00:21:56.238 "name": "Malloc0", 00:21:56.238 "nguid": "C42E48C12B8B41538AE587EF1F6B2ED7", 00:21:56.238 "uuid": "c42e48c1-2b8b-4153-8ae5-87ef1f6b2ed7" 00:21:56.238 } 00:21:56.238 ] 00:21:56.238 } 00:21:56.238 ] 00:21:56.238 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.238 01:27:22 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:56.238 01:27:22 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:56.238 01:27:22 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=3458658 00:21:56.238 01:27:22 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:56.238 01:27:22 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:56.238 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:21:56.238 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:56.238 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:21:56.238 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:21:56.238 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:21:56.238 EAL: No free 2048 kB hugepages reported on node 1 00:21:56.238 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:56.238 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:21:56.238 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:21:56.238 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:21:56.512 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:56.512 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:56.512 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:21:56.512 01:27:22 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:56.512 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.512 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:56.512 Malloc1 00:21:56.512 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.512 01:27:22 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:56.512 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.512 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:56.512 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.512 01:27:22 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:56.512 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.512 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:56.512 Asynchronous Event Request test 00:21:56.512 Attaching to 10.0.0.2 00:21:56.512 Attached to 10.0.0.2 00:21:56.512 Registering asynchronous event callbacks... 00:21:56.512 Starting namespace attribute notice tests for all controllers... 00:21:56.512 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:56.512 aer_cb - Changed Namespace 00:21:56.512 Cleaning up... 00:21:56.512 [ 00:21:56.512 { 00:21:56.512 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:56.512 "subtype": "Discovery", 00:21:56.512 "listen_addresses": [], 00:21:56.512 "allow_any_host": true, 00:21:56.512 "hosts": [] 00:21:56.512 }, 00:21:56.512 { 00:21:56.512 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:56.512 "subtype": "NVMe", 00:21:56.512 "listen_addresses": [ 00:21:56.512 { 00:21:56.512 "trtype": "TCP", 00:21:56.512 "adrfam": "IPv4", 00:21:56.512 "traddr": "10.0.0.2", 00:21:56.512 "trsvcid": "4420" 00:21:56.512 } 00:21:56.512 ], 00:21:56.512 "allow_any_host": true, 00:21:56.512 "hosts": [], 00:21:56.512 "serial_number": "SPDK00000000000001", 00:21:56.512 "model_number": "SPDK bdev Controller", 00:21:56.512 "max_namespaces": 2, 00:21:56.512 "min_cntlid": 1, 00:21:56.512 "max_cntlid": 65519, 00:21:56.512 "namespaces": [ 00:21:56.512 { 00:21:56.512 "nsid": 1, 00:21:56.512 "bdev_name": "Malloc0", 00:21:56.512 "name": "Malloc0", 00:21:56.512 "nguid": "C42E48C12B8B41538AE587EF1F6B2ED7", 00:21:56.512 "uuid": "c42e48c1-2b8b-4153-8ae5-87ef1f6b2ed7" 00:21:56.512 }, 00:21:56.512 { 00:21:56.512 "nsid": 2, 00:21:56.512 "bdev_name": "Malloc1", 00:21:56.512 "name": "Malloc1", 00:21:56.512 "nguid": "706594EB6D474F55ADC50A891B2D9BE4", 00:21:56.512 "uuid": "706594eb-6d47-4f55-adc5-0a891b2d9be4" 00:21:56.512 } 00:21:56.512 ] 00:21:56.512 } 00:21:56.512 ] 00:21:56.512 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.512 01:27:22 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 3458658 00:21:56.512 01:27:22 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:56.512 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.512 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:56.512 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.512 01:27:22 nvmf_tcp.nvmf_aer -- host/aer.sh@46 
-- # rpc_cmd bdev_malloc_delete Malloc1 00:21:56.512 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.512 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:56.512 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.512 01:27:22 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:56.512 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.512 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:56.512 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.512 01:27:22 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:56.512 01:27:22 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:56.512 01:27:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:56.512 01:27:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:21:56.512 01:27:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:56.512 01:27:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:21:56.512 01:27:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:56.512 01:27:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:56.512 rmmod nvme_tcp 00:21:56.512 rmmod nvme_fabrics 00:21:56.512 rmmod nvme_keyring 00:21:56.512 01:27:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:56.512 01:27:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:21:56.512 01:27:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:21:56.512 01:27:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 3458465 ']' 00:21:56.512 01:27:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 3458465 00:21:56.512 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 3458465 ']' 00:21:56.512 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 3458465 00:21:56.512 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:21:56.512 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:56.512 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3458465 00:21:56.770 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:56.770 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:56.770 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3458465' 00:21:56.770 killing process with pid 3458465 00:21:56.770 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 3458465 00:21:56.770 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 3458465 00:21:56.770 01:27:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:56.770 01:27:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:56.770 01:27:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:56.770 01:27:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:56.770 01:27:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:56.770 01:27:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:56.770 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:21:56.770 01:27:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.298 01:27:24 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:59.298 00:21:59.298 real 0m9.089s 00:21:59.298 user 0m7.136s 00:21:59.298 sys 0m4.423s 00:21:59.298 01:27:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:59.298 01:27:24 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:59.298 ************************************ 00:21:59.298 END TEST nvmf_aer 00:21:59.298 ************************************ 00:21:59.298 01:27:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:59.298 01:27:24 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:59.298 01:27:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:59.298 01:27:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:59.298 01:27:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:59.298 ************************************ 00:21:59.298 START TEST nvmf_async_init 00:21:59.298 ************************************ 00:21:59.298 01:27:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:59.298 * Looking for test storage... 00:21:59.298 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:59.298 01:27:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:59.298 01:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:59.298 01:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:59.298 01:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:59.298 01:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:59.298 01:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:59.298 01:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:59.298 01:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:59.298 01:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:59.298 01:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:59.298 01:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:59.298 01:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:59.298 01:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:59.298 01:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:59.298 01:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:59.298 01:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:59.298 01:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:59.298 01:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:59.298 01:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:59.298 01:27:24 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:59.298 01:27:24 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:59.298 01:27:24 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:59.298 01:27:24 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.299 01:27:24 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.299 01:27:24 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.299 01:27:24 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:59.299 01:27:24 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.299 01:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:21:59.299 01:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:59.299 01:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:59.299 01:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:59.299 01:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:59.299 01:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:59.299 01:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:21:59.299 01:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:59.299 01:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:59.299 01:27:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:59.299 01:27:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:59.299 01:27:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:59.299 01:27:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:59.299 01:27:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:59.299 01:27:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:59.299 01:27:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=372e22cfafdf4f5cb8a1c0d3124c2f79 00:21:59.299 01:27:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:59.299 01:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:59.299 01:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:59.299 01:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:59.299 01:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:59.299 01:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:59.299 01:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.299 01:27:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:59.299 01:27:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.299 01:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:59.299 01:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:59.299 01:27:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:21:59.299 01:27:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:04.553 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:04.553 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ tcp == rdma ]] 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:04.553 Found net devices under 0000:86:00.0: cvl_0_0 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:04.553 Found net devices under 0000:86:00.1: cvl_0_1 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:04.553 
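Annotation: the nvmf_tcp_init calls continuing below build the same two-namespace topology as in the aer run: the target port is moved into its own network namespace and the initiator port stays in the default namespace, giving a point-to-point 10.0.0.0/24 link over real E810 hardware. Condensed to its effect (commands as they appear in the log, not the verbatim common.sh):

    ip netns add cvl_0_0_ns_spdk                  # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator IP, default ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                            # initiator -> target sanity check

The two pings (initiator to 10.0.0.2, and 10.0.0.1 back from inside the namespace) verify the link in both directions before nvmf_tgt is started.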
01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:04.553 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:04.553 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:22:04.553 00:22:04.553 --- 10.0.0.2 ping statistics --- 00:22:04.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:04.553 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:04.553 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:04.553 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:22:04.553 00:22:04.553 --- 10.0.0.1 ping statistics --- 00:22:04.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:04.553 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3462175 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 3462175 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 3462175 ']' 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:04.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:04.553 01:27:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:04.554 [2024-07-16 01:27:30.497361] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:22:04.554 [2024-07-16 01:27:30.497404] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:04.554 EAL: No free 2048 kB hugepages reported on node 1 00:22:04.811 [2024-07-16 01:27:30.556697] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:04.811 [2024-07-16 01:27:30.634508] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:04.811 [2024-07-16 01:27:30.634543] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:04.811 [2024-07-16 01:27:30.634550] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:04.811 [2024-07-16 01:27:30.634556] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:04.811 [2024-07-16 01:27:30.634562] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
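Annotation: once waitforlisten returns, the test drives the target entirely over its JSON-RPC socket; rpc_cmd is effectively a thin wrapper around the same server that scripts/rpc.py talks to. The setup calls issued below, written out as a sketch with rpc.py for readability (arguments taken from the log; the RPC socket is a unix socket, so no netns exec is needed on the client side):

    RPC=scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o          # TCP transport, as logged below
    $RPC bdev_null_create null0 1024 512          # 1024 MiB, 512 B blocks
                                                  # (2097152 blocks in the dump below)
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a   # -a: allow any host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 \
        -g 372e22cfafdf4f5cb8a1c0d3124c2f79       # fixed NGUID generated by the test
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420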
00:22:04.811 [2024-07-16 01:27:30.634579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.374 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:05.374 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:22:05.374 01:27:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:05.374 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:05.374 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:05.374 01:27:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:05.374 01:27:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:05.374 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.374 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:05.374 [2024-07-16 01:27:31.333400] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:05.374 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.374 01:27:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:05.374 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.374 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:05.374 null0 00:22:05.374 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.374 01:27:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:05.374 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.374 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:05.374 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.374 01:27:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:05.374 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.374 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:05.630 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.630 01:27:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 372e22cfafdf4f5cb8a1c0d3124c2f79 00:22:05.630 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.630 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:05.630 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.630 01:27:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:05.630 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.630 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:05.630 [2024-07-16 01:27:31.373588] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:05.630 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:22:05.630 01:27:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:05.630 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.630 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:05.630 nvme0n1 00:22:05.630 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.630 01:27:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:05.630 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.630 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:05.630 [ 00:22:05.630 { 00:22:05.630 "name": "nvme0n1", 00:22:05.630 "aliases": [ 00:22:05.630 "372e22cf-afdf-4f5c-b8a1-c0d3124c2f79" 00:22:05.630 ], 00:22:05.630 "product_name": "NVMe disk", 00:22:05.630 "block_size": 512, 00:22:05.630 "num_blocks": 2097152, 00:22:05.630 "uuid": "372e22cf-afdf-4f5c-b8a1-c0d3124c2f79", 00:22:05.630 "assigned_rate_limits": { 00:22:05.630 "rw_ios_per_sec": 0, 00:22:05.630 "rw_mbytes_per_sec": 0, 00:22:05.630 "r_mbytes_per_sec": 0, 00:22:05.630 "w_mbytes_per_sec": 0 00:22:05.630 }, 00:22:05.630 "claimed": false, 00:22:05.630 "zoned": false, 00:22:05.630 "supported_io_types": { 00:22:05.630 "read": true, 00:22:05.630 "write": true, 00:22:05.630 "unmap": false, 00:22:05.630 "flush": true, 00:22:05.630 "reset": true, 00:22:05.630 "nvme_admin": true, 00:22:05.630 "nvme_io": true, 00:22:05.630 "nvme_io_md": false, 00:22:05.630 "write_zeroes": true, 00:22:05.630 "zcopy": false, 00:22:05.630 "get_zone_info": false, 00:22:05.630 "zone_management": false, 00:22:05.630 "zone_append": false, 00:22:05.630 "compare": true, 00:22:05.630 "compare_and_write": true, 00:22:05.630 "abort": true, 00:22:05.630 "seek_hole": false, 00:22:05.630 "seek_data": false, 00:22:05.630 "copy": true, 00:22:05.630 "nvme_iov_md": false 00:22:05.630 }, 00:22:05.630 "memory_domains": [ 00:22:05.630 { 00:22:05.630 "dma_device_id": "system", 00:22:05.630 "dma_device_type": 1 00:22:05.630 } 00:22:05.630 ], 00:22:05.630 "driver_specific": { 00:22:05.630 "nvme": [ 00:22:05.630 { 00:22:05.630 "trid": { 00:22:05.630 "trtype": "TCP", 00:22:05.630 "adrfam": "IPv4", 00:22:05.630 "traddr": "10.0.0.2", 00:22:05.630 "trsvcid": "4420", 00:22:05.887 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:05.887 }, 00:22:05.887 "ctrlr_data": { 00:22:05.887 "cntlid": 1, 00:22:05.887 "vendor_id": "0x8086", 00:22:05.887 "model_number": "SPDK bdev Controller", 00:22:05.887 "serial_number": "00000000000000000000", 00:22:05.887 "firmware_revision": "24.09", 00:22:05.887 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:05.887 "oacs": { 00:22:05.887 "security": 0, 00:22:05.887 "format": 0, 00:22:05.887 "firmware": 0, 00:22:05.887 "ns_manage": 0 00:22:05.887 }, 00:22:05.887 "multi_ctrlr": true, 00:22:05.887 "ana_reporting": false 00:22:05.887 }, 00:22:05.887 "vs": { 00:22:05.887 "nvme_version": "1.3" 00:22:05.887 }, 00:22:05.887 "ns_data": { 00:22:05.887 "id": 1, 00:22:05.887 "can_share": true 00:22:05.887 } 00:22:05.887 } 00:22:05.887 ], 00:22:05.887 "mp_policy": "active_passive" 00:22:05.887 } 00:22:05.887 } 00:22:05.887 ] 00:22:05.887 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.888 01:27:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 
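Annotation: the bdev_nvme_reset_controller path that follows disconnects the admin queue and reconnects, so the target assigns a fresh controller association. One way to observe this from the bdev_get_bdevs JSON shown above is a jq one-liner (hypothetical, not part of the test script):

    # cntlid is 1 in the first dump and becomes 2 after the reset below
    scripts/rpc.py bdev_get_bdevs -b nvme0n1 \
        | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'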
00:22:05.888 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.888 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:05.888 [2024-07-16 01:27:31.622120] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:05.888 [2024-07-16 01:27:31.622185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1449350 (9): Bad file descriptor 00:22:05.888 [2024-07-16 01:27:31.754418] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:05.888 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.888 01:27:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:05.888 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.888 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:05.888 [ 00:22:05.888 { 00:22:05.888 "name": "nvme0n1", 00:22:05.888 "aliases": [ 00:22:05.888 "372e22cf-afdf-4f5c-b8a1-c0d3124c2f79" 00:22:05.888 ], 00:22:05.888 "product_name": "NVMe disk", 00:22:05.888 "block_size": 512, 00:22:05.888 "num_blocks": 2097152, 00:22:05.888 "uuid": "372e22cf-afdf-4f5c-b8a1-c0d3124c2f79", 00:22:05.888 "assigned_rate_limits": { 00:22:05.888 "rw_ios_per_sec": 0, 00:22:05.888 "rw_mbytes_per_sec": 0, 00:22:05.888 "r_mbytes_per_sec": 0, 00:22:05.888 "w_mbytes_per_sec": 0 00:22:05.888 }, 00:22:05.888 "claimed": false, 00:22:05.888 "zoned": false, 00:22:05.888 "supported_io_types": { 00:22:05.888 "read": true, 00:22:05.888 "write": true, 00:22:05.888 "unmap": false, 00:22:05.888 "flush": true, 00:22:05.888 "reset": true, 00:22:05.888 "nvme_admin": true, 00:22:05.888 "nvme_io": true, 00:22:05.888 "nvme_io_md": false, 00:22:05.888 "write_zeroes": true, 00:22:05.888 "zcopy": false, 00:22:05.888 "get_zone_info": false, 00:22:05.888 "zone_management": false, 00:22:05.888 "zone_append": false, 00:22:05.888 "compare": true, 00:22:05.888 "compare_and_write": true, 00:22:05.888 "abort": true, 00:22:05.888 "seek_hole": false, 00:22:05.888 "seek_data": false, 00:22:05.888 "copy": true, 00:22:05.888 "nvme_iov_md": false 00:22:05.888 }, 00:22:05.888 "memory_domains": [ 00:22:05.888 { 00:22:05.888 "dma_device_id": "system", 00:22:05.888 "dma_device_type": 1 00:22:05.888 } 00:22:05.888 ], 00:22:05.888 "driver_specific": { 00:22:05.888 "nvme": [ 00:22:05.888 { 00:22:05.888 "trid": { 00:22:05.888 "trtype": "TCP", 00:22:05.888 "adrfam": "IPv4", 00:22:05.888 "traddr": "10.0.0.2", 00:22:05.888 "trsvcid": "4420", 00:22:05.888 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:05.888 }, 00:22:05.888 "ctrlr_data": { 00:22:05.888 "cntlid": 2, 00:22:05.888 "vendor_id": "0x8086", 00:22:05.888 "model_number": "SPDK bdev Controller", 00:22:05.888 "serial_number": "00000000000000000000", 00:22:05.888 "firmware_revision": "24.09", 00:22:05.888 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:05.888 "oacs": { 00:22:05.888 "security": 0, 00:22:05.888 "format": 0, 00:22:05.888 "firmware": 0, 00:22:05.888 "ns_manage": 0 00:22:05.888 }, 00:22:05.888 "multi_ctrlr": true, 00:22:05.888 "ana_reporting": false 00:22:05.888 }, 00:22:05.888 "vs": { 00:22:05.888 "nvme_version": "1.3" 00:22:05.888 }, 00:22:05.888 "ns_data": { 00:22:05.888 "id": 1, 00:22:05.888 "can_share": true 00:22:05.888 } 00:22:05.888 } 00:22:05.888 ], 00:22:05.888 "mp_policy": "active_passive" 00:22:05.888 } 00:22:05.888 } 
00:22:05.888 ] 00:22:05.888 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.888 01:27:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:05.888 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.888 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:05.888 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.888 01:27:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:05.888 01:27:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.Gjg5FWwoeW 00:22:05.888 01:27:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:05.888 01:27:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.Gjg5FWwoeW 00:22:05.888 01:27:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:05.888 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.888 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:05.888 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.888 01:27:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:05.888 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.888 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:05.888 [2024-07-16 01:27:31.802656] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:05.888 [2024-07-16 01:27:31.802744] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:05.888 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.888 01:27:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Gjg5FWwoeW 00:22:05.888 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.888 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:05.888 [2024-07-16 01:27:31.810670] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:05.888 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.888 01:27:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Gjg5FWwoeW 00:22:05.888 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.888 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:05.888 [2024-07-16 01:27:31.818705] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:05.888 [2024-07-16 01:27:31.818739] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 
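The block above is the TLS leg of the test: host access is locked down, a --secure-channel listener is opened on a second port, and the initiator reattaches presenting a pre-shared key. Condensed into plain rpc.py calls (the key is the sample NVMeTLSkey-1 interchange key from the script; the key-file name is simply whatever mktemp returned):

    # Stash the PSK in a file readable only by the caller.
    KEY_PATH=$(mktemp)
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$KEY_PATH"
    chmod 0600 "$KEY_PATH"

    # Require explicit host grants, listen with TLS on 4421, admit host1 by PSK.
    ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
        nqn.2016-06.io.spdk:host1 --psk "$KEY_PATH"

    # Reattach over the secured port with the matching host NQN and key.
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
        -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 \
        -q nqn.2016-06.io.spdk:host1 --psk "$KEY_PATH"

Note the target's own warnings above: on this build both the PSK-path listener option and spdk_nvme_ctrlr_opts.psk are deprecated features slated for removal in v24.09, so the exact flag spelling should be checked against the release actually in use. The bdev dump that follows confirms the controller came back on port 4421 with cntlid 3.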
00:22:06.146 nvme0n1 00:22:06.146 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.146 01:27:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:06.146 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.146 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:06.146 [ 00:22:06.146 { 00:22:06.146 "name": "nvme0n1", 00:22:06.146 "aliases": [ 00:22:06.146 "372e22cf-afdf-4f5c-b8a1-c0d3124c2f79" 00:22:06.146 ], 00:22:06.146 "product_name": "NVMe disk", 00:22:06.146 "block_size": 512, 00:22:06.146 "num_blocks": 2097152, 00:22:06.146 "uuid": "372e22cf-afdf-4f5c-b8a1-c0d3124c2f79", 00:22:06.146 "assigned_rate_limits": { 00:22:06.146 "rw_ios_per_sec": 0, 00:22:06.146 "rw_mbytes_per_sec": 0, 00:22:06.146 "r_mbytes_per_sec": 0, 00:22:06.146 "w_mbytes_per_sec": 0 00:22:06.146 }, 00:22:06.146 "claimed": false, 00:22:06.146 "zoned": false, 00:22:06.146 "supported_io_types": { 00:22:06.146 "read": true, 00:22:06.146 "write": true, 00:22:06.146 "unmap": false, 00:22:06.146 "flush": true, 00:22:06.146 "reset": true, 00:22:06.146 "nvme_admin": true, 00:22:06.146 "nvme_io": true, 00:22:06.146 "nvme_io_md": false, 00:22:06.146 "write_zeroes": true, 00:22:06.146 "zcopy": false, 00:22:06.146 "get_zone_info": false, 00:22:06.146 "zone_management": false, 00:22:06.146 "zone_append": false, 00:22:06.146 "compare": true, 00:22:06.146 "compare_and_write": true, 00:22:06.146 "abort": true, 00:22:06.146 "seek_hole": false, 00:22:06.146 "seek_data": false, 00:22:06.146 "copy": true, 00:22:06.146 "nvme_iov_md": false 00:22:06.146 }, 00:22:06.146 "memory_domains": [ 00:22:06.146 { 00:22:06.146 "dma_device_id": "system", 00:22:06.146 "dma_device_type": 1 00:22:06.146 } 00:22:06.146 ], 00:22:06.146 "driver_specific": { 00:22:06.146 "nvme": [ 00:22:06.146 { 00:22:06.146 "trid": { 00:22:06.146 "trtype": "TCP", 00:22:06.146 "adrfam": "IPv4", 00:22:06.146 "traddr": "10.0.0.2", 00:22:06.146 "trsvcid": "4421", 00:22:06.146 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:06.146 }, 00:22:06.146 "ctrlr_data": { 00:22:06.146 "cntlid": 3, 00:22:06.146 "vendor_id": "0x8086", 00:22:06.146 "model_number": "SPDK bdev Controller", 00:22:06.146 "serial_number": "00000000000000000000", 00:22:06.146 "firmware_revision": "24.09", 00:22:06.146 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:06.146 "oacs": { 00:22:06.146 "security": 0, 00:22:06.146 "format": 0, 00:22:06.146 "firmware": 0, 00:22:06.146 "ns_manage": 0 00:22:06.146 }, 00:22:06.146 "multi_ctrlr": true, 00:22:06.146 "ana_reporting": false 00:22:06.146 }, 00:22:06.146 "vs": { 00:22:06.146 "nvme_version": "1.3" 00:22:06.146 }, 00:22:06.146 "ns_data": { 00:22:06.146 "id": 1, 00:22:06.146 "can_share": true 00:22:06.146 } 00:22:06.146 } 00:22:06.146 ], 00:22:06.146 "mp_policy": "active_passive" 00:22:06.146 } 00:22:06.146 } 00:22:06.146 ] 00:22:06.146 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.146 01:27:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:06.146 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.146 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:06.146 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.146 01:27:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f 
/tmp/tmp.Gjg5FWwoeW 00:22:06.146 01:27:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:22:06.146 01:27:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:22:06.146 01:27:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:06.146 01:27:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:22:06.146 01:27:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:06.146 01:27:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:22:06.146 01:27:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:06.146 01:27:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:06.146 rmmod nvme_tcp 00:22:06.146 rmmod nvme_fabrics 00:22:06.146 rmmod nvme_keyring 00:22:06.146 01:27:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:06.146 01:27:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:22:06.146 01:27:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:22:06.146 01:27:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3462175 ']' 00:22:06.146 01:27:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 3462175 00:22:06.146 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 3462175 ']' 00:22:06.146 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 3462175 00:22:06.146 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:22:06.146 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:06.146 01:27:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3462175 00:22:06.146 01:27:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:06.146 01:27:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:06.146 01:27:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3462175' 00:22:06.146 killing process with pid 3462175 00:22:06.146 01:27:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 3462175 00:22:06.146 [2024-07-16 01:27:32.011421] app.c:1029:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:06.146 [2024-07-16 01:27:32.011445] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:06.146 01:27:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 3462175 00:22:06.404 01:27:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:06.404 01:27:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:06.404 01:27:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:06.404 01:27:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:06.404 01:27:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:06.404 01:27:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.404 01:27:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:06.404 01:27:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:22:08.303 01:27:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:08.303 00:22:08.303 real 0m9.414s 00:22:08.303 user 0m3.421s 00:22:08.303 sys 0m4.439s 00:22:08.303 01:27:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:08.303 01:27:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:08.303 ************************************ 00:22:08.303 END TEST nvmf_async_init 00:22:08.303 ************************************ 00:22:08.303 01:27:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:08.303 01:27:34 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:08.303 01:27:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:08.303 01:27:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:08.303 01:27:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:08.560 ************************************ 00:22:08.560 START TEST dma 00:22:08.560 ************************************ 00:22:08.560 01:27:34 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:08.560 * Looking for test storage... 00:22:08.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:08.560 01:27:34 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:08.560 01:27:34 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:22:08.560 01:27:34 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:08.560 01:27:34 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:08.560 01:27:34 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:08.560 01:27:34 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:08.560 01:27:34 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:08.560 01:27:34 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:08.560 01:27:34 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:08.560 01:27:34 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:08.560 01:27:34 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:08.560 01:27:34 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:08.560 01:27:34 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:08.560 01:27:34 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:08.560 01:27:34 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:08.560 01:27:34 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:08.560 01:27:34 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:08.560 01:27:34 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:08.560 01:27:34 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:08.560 01:27:34 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:08.560 01:27:34 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:08.560 01:27:34 nvmf_tcp.dma -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:22:08.560 01:27:34 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.560 01:27:34 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.560 01:27:34 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.560 01:27:34 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:22:08.560 01:27:34 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.560 01:27:34 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:22:08.561 01:27:34 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:08.561 01:27:34 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:08.561 01:27:34 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:08.561 01:27:34 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:08.561 01:27:34 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:08.561 01:27:34 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:08.561 01:27:34 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:08.561 01:27:34 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:08.561 01:27:34 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:08.561 01:27:34 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:22:08.561 00:22:08.561 real 0m0.116s 00:22:08.561 user 0m0.059s 00:22:08.561 sys 0m0.065s 00:22:08.561 01:27:34 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:08.561 01:27:34 nvmf_tcp.dma 
-- common/autotest_common.sh@10 -- # set +x 00:22:08.561 ************************************ 00:22:08.561 END TEST dma 00:22:08.561 ************************************ 00:22:08.561 01:27:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:08.561 01:27:34 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:08.561 01:27:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:08.561 01:27:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:08.561 01:27:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:08.561 ************************************ 00:22:08.561 START TEST nvmf_identify 00:22:08.561 ************************************ 00:22:08.561 01:27:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:08.819 * Looking for test storage... 00:22:08.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:08.819 01:27:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.820 01:27:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:08.820 01:27:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:08.820 01:27:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:22:08.820 01:27:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:14.073 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:14.073 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:14.073 Found net devices under 0000:86:00.0: cvl_0_0 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:14.073 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:14.074 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.074 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:14.074 Found net devices under 0000:86:00.1: cvl_0_1 00:22:14.074 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.074 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:14.074 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:22:14.074 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:14.074 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:14.074 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:14.074 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:14.074 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:14.074 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:14.074 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:14.074 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:14.074 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:14.074 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:14.074 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:14.074 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:14.074 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:14.074 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:14.074 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:14.074 01:27:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:14.074 01:27:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:14.074 01:27:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:14.074 01:27:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:14.074 01:27:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:14.332 01:27:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:14.332 01:27:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:14.332 01:27:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:14.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:14.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:22:14.332 00:22:14.332 --- 10.0.0.2 ping statistics --- 00:22:14.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.332 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:22:14.332 01:27:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:14.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:14.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:22:14.332 00:22:14.332 --- 10.0.0.1 ping statistics --- 00:22:14.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.332 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:22:14.332 01:27:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:14.332 01:27:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:22:14.332 01:27:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:14.332 01:27:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:14.332 01:27:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:14.332 01:27:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:14.332 01:27:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:14.332 01:27:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:14.332 01:27:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:14.332 01:27:40 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:14.332 01:27:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:14.332 01:27:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:14.332 01:27:40 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3465986 00:22:14.332 01:27:40 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:14.332 01:27:40 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:14.332 01:27:40 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3465986 00:22:14.332 01:27:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 3465986 ']' 00:22:14.332 01:27:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:14.332 01:27:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:14.332 01:27:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:14.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:14.332 01:27:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:14.332 01:27:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:14.332 [2024-07-16 01:27:40.236707] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
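nvmftestinit above is what makes a single two-port e810 card behave like an initiator/target pair: one port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, the other (cvl_0_1) stays in the root namespace as 10.0.0.1, and a one-packet ping in each direction gates the rest of the run. The same plumbing, reduced to the commands the log just executed:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and back

With connectivity proven (0% loss both ways), nvmf_tgt is launched inside the namespace via the NVMF_TARGET_NS_CMD prefix, which is why every later target-side command in this test runs under ip netns exec cvl_0_0_ns_spdk.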
00:22:14.332 [2024-07-16 01:27:40.236747] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:14.332 EAL: No free 2048 kB hugepages reported on node 1 00:22:14.332 [2024-07-16 01:27:40.294620] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:14.589 [2024-07-16 01:27:40.374948] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:14.589 [2024-07-16 01:27:40.374986] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:14.589 [2024-07-16 01:27:40.374993] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:14.589 [2024-07-16 01:27:40.374999] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:14.589 [2024-07-16 01:27:40.375005] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:14.589 [2024-07-16 01:27:40.375043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:14.589 [2024-07-16 01:27:40.375140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:14.589 [2024-07-16 01:27:40.375230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:14.589 [2024-07-16 01:27:40.375231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:15.153 01:27:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:15.153 01:27:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:22:15.153 01:27:41 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:15.153 01:27:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.153 01:27:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:15.153 [2024-07-16 01:27:41.052136] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:15.153 01:27:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.153 01:27:41 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:15.153 01:27:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:15.153 01:27:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:15.153 01:27:41 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:15.153 01:27:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.153 01:27:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:15.153 Malloc0 00:22:15.153 01:27:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.153 01:27:41 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:15.153 01:27:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.153 01:27:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:15.153 01:27:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.153 01:27:41 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid 
ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:15.153 01:27:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.153 01:27:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:15.153 01:27:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.153 01:27:41 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:15.153 01:27:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.153 01:27:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:15.153 [2024-07-16 01:27:41.140087] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:15.411 01:27:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.411 01:27:41 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:15.411 01:27:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.411 01:27:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:15.411 01:27:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.411 01:27:41 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:15.411 01:27:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.411 01:27:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:15.411 [ 00:22:15.411 { 00:22:15.411 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:15.411 "subtype": "Discovery", 00:22:15.411 "listen_addresses": [ 00:22:15.411 { 00:22:15.411 "trtype": "TCP", 00:22:15.411 "adrfam": "IPv4", 00:22:15.411 "traddr": "10.0.0.2", 00:22:15.411 "trsvcid": "4420" 00:22:15.411 } 00:22:15.411 ], 00:22:15.411 "allow_any_host": true, 00:22:15.411 "hosts": [] 00:22:15.411 }, 00:22:15.411 { 00:22:15.411 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:15.411 "subtype": "NVMe", 00:22:15.411 "listen_addresses": [ 00:22:15.411 { 00:22:15.411 "trtype": "TCP", 00:22:15.411 "adrfam": "IPv4", 00:22:15.411 "traddr": "10.0.0.2", 00:22:15.411 "trsvcid": "4420" 00:22:15.411 } 00:22:15.411 ], 00:22:15.411 "allow_any_host": true, 00:22:15.411 "hosts": [], 00:22:15.411 "serial_number": "SPDK00000000000001", 00:22:15.411 "model_number": "SPDK bdev Controller", 00:22:15.411 "max_namespaces": 32, 00:22:15.411 "min_cntlid": 1, 00:22:15.411 "max_cntlid": 65519, 00:22:15.411 "namespaces": [ 00:22:15.411 { 00:22:15.411 "nsid": 1, 00:22:15.411 "bdev_name": "Malloc0", 00:22:15.411 "name": "Malloc0", 00:22:15.411 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:15.411 "eui64": "ABCDEF0123456789", 00:22:15.411 "uuid": "6d6077a4-e3bd-4dc7-96f1-ec60e3e39c7d" 00:22:15.411 } 00:22:15.412 ] 00:22:15.412 } 00:22:15.412 ] 00:22:15.412 01:27:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.412 01:27:41 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:15.412 [2024-07-16 01:27:41.190266] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
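The setup just completed builds a minimal identify target inside the namespace: a TCP transport, a 64 MiB/512 B-block Malloc0 bdev exported as namespace 1 of cnode1 with fixed NGUID and EUI-64 values, and data plus discovery listeners on 10.0.0.2:4420. Reduced to the underlying calls (a sketch only; the harness issues these through rpc_cmd against the namespaced nvmf_tgt):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # harness's stock TCP options
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # The test then points the identify example at the discovery subsystem;
    # -L all enables the verbose per-state traces seen below.
    ./build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all

What follows is that process's own DPDK/EAL bring-up and the admin-queue connect state machine: the icreq exchange, FABRIC CONNECT, reads of VS and CAP, CC.EN set to 1, a wait for CSTS.RDY = 1, and finally the IDENTIFY (06h) commands whose completions the tool formats.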
00:22:15.412 [2024-07-16 01:27:41.190303] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3466216 ] 00:22:15.412 EAL: No free 2048 kB hugepages reported on node 1 00:22:15.412 [2024-07-16 01:27:41.217785] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:22:15.412 [2024-07-16 01:27:41.217831] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:15.412 [2024-07-16 01:27:41.217836] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:15.412 [2024-07-16 01:27:41.217849] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:15.412 [2024-07-16 01:27:41.217855] sock.c: 357:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:15.412 [2024-07-16 01:27:41.221609] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:22:15.412 [2024-07-16 01:27:41.221648] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xfafec0 0 00:22:15.412 [2024-07-16 01:27:41.221748] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:15.412 [2024-07-16 01:27:41.221757] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:15.412 [2024-07-16 01:27:41.221761] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:15.412 [2024-07-16 01:27:41.221765] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:15.412 [2024-07-16 01:27:41.221788] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.412 [2024-07-16 01:27:41.221794] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.412 [2024-07-16 01:27:41.221797] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfafec0) 00:22:15.412 [2024-07-16 01:27:41.221810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:15.412 [2024-07-16 01:27:41.221823] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1032fc0, cid 0, qid 0 00:22:15.412 [2024-07-16 01:27:41.229346] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.412 [2024-07-16 01:27:41.229354] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.412 [2024-07-16 01:27:41.229357] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.412 [2024-07-16 01:27:41.229361] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1032fc0) on tqpair=0xfafec0 00:22:15.412 [2024-07-16 01:27:41.229370] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:15.412 [2024-07-16 01:27:41.229376] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:22:15.412 [2024-07-16 01:27:41.229381] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:22:15.412 [2024-07-16 01:27:41.229395] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.412 [2024-07-16 01:27:41.229398] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.412 [2024-07-16 01:27:41.229402] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfafec0) 00:22:15.412 [2024-07-16 01:27:41.229409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.412 [2024-07-16 01:27:41.229423] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1032fc0, cid 0, qid 0 00:22:15.412 [2024-07-16 01:27:41.229583] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.412 [2024-07-16 01:27:41.229588] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.412 [2024-07-16 01:27:41.229591] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.412 [2024-07-16 01:27:41.229594] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1032fc0) on tqpair=0xfafec0 00:22:15.412 [2024-07-16 01:27:41.229599] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:22:15.412 [2024-07-16 01:27:41.229605] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:22:15.412 [2024-07-16 01:27:41.229611] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.412 [2024-07-16 01:27:41.229614] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.412 [2024-07-16 01:27:41.229617] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfafec0) 00:22:15.412 [2024-07-16 01:27:41.229622] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.412 [2024-07-16 01:27:41.229632] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1032fc0, cid 0, qid 0 00:22:15.412 [2024-07-16 01:27:41.229695] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.412 [2024-07-16 01:27:41.229701] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.412 [2024-07-16 01:27:41.229704] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.412 [2024-07-16 01:27:41.229707] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1032fc0) on tqpair=0xfafec0 00:22:15.412 [2024-07-16 01:27:41.229712] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:22:15.412 [2024-07-16 01:27:41.229718] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:22:15.412 [2024-07-16 01:27:41.229724] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.412 [2024-07-16 01:27:41.229727] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.412 [2024-07-16 01:27:41.229730] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfafec0) 00:22:15.412 [2024-07-16 01:27:41.229735] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.412 [2024-07-16 01:27:41.229745] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1032fc0, cid 0, qid 0 00:22:15.412 [2024-07-16 01:27:41.229811] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.412 
[2024-07-16 01:27:41.229817] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.412 [2024-07-16 01:27:41.229820] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.412 [2024-07-16 01:27:41.229823] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1032fc0) on tqpair=0xfafec0 00:22:15.412 [2024-07-16 01:27:41.229827] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:15.412 [2024-07-16 01:27:41.229835] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.412 [2024-07-16 01:27:41.229838] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.412 [2024-07-16 01:27:41.229841] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfafec0) 00:22:15.412 [2024-07-16 01:27:41.229847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.412 [2024-07-16 01:27:41.229856] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1032fc0, cid 0, qid 0 00:22:15.412 [2024-07-16 01:27:41.229917] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.412 [2024-07-16 01:27:41.229924] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.412 [2024-07-16 01:27:41.229928] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.412 [2024-07-16 01:27:41.229931] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1032fc0) on tqpair=0xfafec0 00:22:15.412 [2024-07-16 01:27:41.229935] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:22:15.412 [2024-07-16 01:27:41.229939] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:22:15.412 [2024-07-16 01:27:41.229945] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:15.412 [2024-07-16 01:27:41.230050] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:22:15.412 [2024-07-16 01:27:41.230054] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:15.412 [2024-07-16 01:27:41.230061] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.412 [2024-07-16 01:27:41.230064] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.412 [2024-07-16 01:27:41.230067] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfafec0) 00:22:15.412 [2024-07-16 01:27:41.230073] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.412 [2024-07-16 01:27:41.230082] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1032fc0, cid 0, qid 0 00:22:15.412 [2024-07-16 01:27:41.230143] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.412 [2024-07-16 01:27:41.230148] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.412 [2024-07-16 01:27:41.230151] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:22:15.412 [2024-07-16 01:27:41.230154] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1032fc0) on tqpair=0xfafec0 00:22:15.412 [2024-07-16 01:27:41.230158] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:15.412 [2024-07-16 01:27:41.230166] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.412 [2024-07-16 01:27:41.230169] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.412 [2024-07-16 01:27:41.230172] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfafec0) 00:22:15.412 [2024-07-16 01:27:41.230177] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.412 [2024-07-16 01:27:41.230186] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1032fc0, cid 0, qid 0 00:22:15.412 [2024-07-16 01:27:41.230248] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.412 [2024-07-16 01:27:41.230254] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.412 [2024-07-16 01:27:41.230257] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.412 [2024-07-16 01:27:41.230260] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1032fc0) on tqpair=0xfafec0 00:22:15.412 [2024-07-16 01:27:41.230263] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:15.412 [2024-07-16 01:27:41.230267] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:22:15.412 [2024-07-16 01:27:41.230273] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:22:15.412 [2024-07-16 01:27:41.230283] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:22:15.412 [2024-07-16 01:27:41.230291] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.412 [2024-07-16 01:27:41.230295] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfafec0) 00:22:15.412 [2024-07-16 01:27:41.230301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.412 [2024-07-16 01:27:41.230310] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1032fc0, cid 0, qid 0 00:22:15.412 [2024-07-16 01:27:41.230432] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:15.412 [2024-07-16 01:27:41.230437] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:15.413 [2024-07-16 01:27:41.230440] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:15.413 [2024-07-16 01:27:41.230444] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfafec0): datao=0, datal=4096, cccid=0 00:22:15.413 [2024-07-16 01:27:41.230448] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1032fc0) on tqpair(0xfafec0): expected_datao=0, payload_size=4096 00:22:15.413 [2024-07-16 01:27:41.230452] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:22:15.413 [2024-07-16 01:27:41.230458] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:15.413 [2024-07-16 01:27:41.230462] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:15.413 [2024-07-16 01:27:41.230483] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.413 [2024-07-16 01:27:41.230488] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.413 [2024-07-16 01:27:41.230491] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.413 [2024-07-16 01:27:41.230494] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1032fc0) on tqpair=0xfafec0 00:22:15.413 [2024-07-16 01:27:41.230500] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:22:15.413 [2024-07-16 01:27:41.230504] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:22:15.413 [2024-07-16 01:27:41.230508] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:22:15.413 [2024-07-16 01:27:41.230512] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:22:15.413 [2024-07-16 01:27:41.230516] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:22:15.413 [2024-07-16 01:27:41.230520] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:22:15.413 [2024-07-16 01:27:41.230529] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:22:15.413 [2024-07-16 01:27:41.230536] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.413 [2024-07-16 01:27:41.230540] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.413 [2024-07-16 01:27:41.230543] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfafec0) 00:22:15.413 [2024-07-16 01:27:41.230549] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:15.413 [2024-07-16 01:27:41.230559] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1032fc0, cid 0, qid 0 00:22:15.413 [2024-07-16 01:27:41.230625] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.413 [2024-07-16 01:27:41.230630] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.413 [2024-07-16 01:27:41.230633] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.413 [2024-07-16 01:27:41.230636] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1032fc0) on tqpair=0xfafec0 00:22:15.413 [2024-07-16 01:27:41.230643] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.413 [2024-07-16 01:27:41.230647] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.413 [2024-07-16 01:27:41.230650] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfafec0) 00:22:15.413 [2024-07-16 01:27:41.230656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.413 [2024-07-16 01:27:41.230661] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.413 [2024-07-16 01:27:41.230665] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.413 [2024-07-16 01:27:41.230667] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xfafec0) 00:22:15.413 [2024-07-16 01:27:41.230672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.413 [2024-07-16 01:27:41.230677] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.413 [2024-07-16 01:27:41.230680] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.413 [2024-07-16 01:27:41.230683] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xfafec0) 00:22:15.413 [2024-07-16 01:27:41.230688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.413 [2024-07-16 01:27:41.230693] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.413 [2024-07-16 01:27:41.230695] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.413 [2024-07-16 01:27:41.230698] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfafec0) 00:22:15.413 [2024-07-16 01:27:41.230703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.413 [2024-07-16 01:27:41.230707] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:22:15.413 [2024-07-16 01:27:41.230716] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:15.413 [2024-07-16 01:27:41.230722] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.413 [2024-07-16 01:27:41.230725] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfafec0) 00:22:15.413 [2024-07-16 01:27:41.230731] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.413 [2024-07-16 01:27:41.230741] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1032fc0, cid 0, qid 0 00:22:15.413 [2024-07-16 01:27:41.230745] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1033140, cid 1, qid 0 00:22:15.413 [2024-07-16 01:27:41.230749] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10332c0, cid 2, qid 0 00:22:15.413 [2024-07-16 01:27:41.230753] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1033440, cid 3, qid 0 00:22:15.413 [2024-07-16 01:27:41.230757] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10335c0, cid 4, qid 0 00:22:15.413 [2024-07-16 01:27:41.230857] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.413 [2024-07-16 01:27:41.230862] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.413 [2024-07-16 01:27:41.230865] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.413 [2024-07-16 01:27:41.230869] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10335c0) on tqpair=0xfafec0 00:22:15.413 [2024-07-16 01:27:41.230873] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:22:15.413 [2024-07-16 01:27:41.230877] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:22:15.413 [2024-07-16 01:27:41.230885] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.413 [2024-07-16 01:27:41.230889] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfafec0) 00:22:15.413 [2024-07-16 01:27:41.230894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.413 [2024-07-16 01:27:41.230905] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10335c0, cid 4, qid 0 00:22:15.413 [2024-07-16 01:27:41.230979] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:15.413 [2024-07-16 01:27:41.230985] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:15.413 [2024-07-16 01:27:41.230988] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:15.413 [2024-07-16 01:27:41.230991] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfafec0): datao=0, datal=4096, cccid=4 00:22:15.413 [2024-07-16 01:27:41.230995] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10335c0) on tqpair(0xfafec0): expected_datao=0, payload_size=4096 00:22:15.413 [2024-07-16 01:27:41.230998] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.413 [2024-07-16 01:27:41.231004] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:15.413 [2024-07-16 01:27:41.231007] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:15.413 [2024-07-16 01:27:41.231027] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.413 [2024-07-16 01:27:41.231032] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.413 [2024-07-16 01:27:41.231035] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.413 [2024-07-16 01:27:41.231038] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10335c0) on tqpair=0xfafec0 00:22:15.413 [2024-07-16 01:27:41.231049] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:22:15.413 [2024-07-16 01:27:41.231069] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.413 [2024-07-16 01:27:41.231073] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfafec0) 00:22:15.413 [2024-07-16 01:27:41.231079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.413 [2024-07-16 01:27:41.231084] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.413 [2024-07-16 01:27:41.231087] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.413 [2024-07-16 01:27:41.231090] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfafec0) 00:22:15.413 [2024-07-16 01:27:41.231095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.413 [2024-07-16 01:27:41.231108] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x10335c0, cid 4, qid 0 00:22:15.413 [2024-07-16 01:27:41.231112] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1033740, cid 5, qid 0 00:22:15.413 [2024-07-16 01:27:41.231206] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:15.413 [2024-07-16 01:27:41.231212] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:15.413 [2024-07-16 01:27:41.231215] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:15.413 [2024-07-16 01:27:41.231218] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfafec0): datao=0, datal=1024, cccid=4 00:22:15.413 [2024-07-16 01:27:41.231221] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10335c0) on tqpair(0xfafec0): expected_datao=0, payload_size=1024 00:22:15.413 [2024-07-16 01:27:41.231225] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.413 [2024-07-16 01:27:41.231230] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:15.413 [2024-07-16 01:27:41.231233] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:15.413 [2024-07-16 01:27:41.231238] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.413 [2024-07-16 01:27:41.231243] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.413 [2024-07-16 01:27:41.231245] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.413 [2024-07-16 01:27:41.231249] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1033740) on tqpair=0xfafec0 00:22:15.413 [2024-07-16 01:27:41.276343] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.413 [2024-07-16 01:27:41.276356] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.413 [2024-07-16 01:27:41.276362] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.413 [2024-07-16 01:27:41.276366] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10335c0) on tqpair=0xfafec0 00:22:15.413 [2024-07-16 01:27:41.276381] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.413 [2024-07-16 01:27:41.276385] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfafec0) 00:22:15.413 [2024-07-16 01:27:41.276393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.413 [2024-07-16 01:27:41.276409] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10335c0, cid 4, qid 0 00:22:15.413 [2024-07-16 01:27:41.276575] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:15.413 [2024-07-16 01:27:41.276581] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:15.413 [2024-07-16 01:27:41.276584] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:15.413 [2024-07-16 01:27:41.276587] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfafec0): datao=0, datal=3072, cccid=4 00:22:15.414 [2024-07-16 01:27:41.276591] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10335c0) on tqpair(0xfafec0): expected_datao=0, payload_size=3072 00:22:15.414 [2024-07-16 01:27:41.276595] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.414 [2024-07-16 01:27:41.276601] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:15.414 [2024-07-16 01:27:41.276604] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:15.414 [2024-07-16 01:27:41.276646] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:15.414 [2024-07-16 01:27:41.276651] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:15.414 [2024-07-16 01:27:41.276654] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:15.414 [2024-07-16 01:27:41.276658] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10335c0) on tqpair=0xfafec0
00:22:15.414 [2024-07-16 01:27:41.276665] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:15.414 [2024-07-16 01:27:41.276668] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfafec0)
00:22:15.414 [2024-07-16 01:27:41.276673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.414 [2024-07-16 01:27:41.276685] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10335c0, cid 4, qid 0
00:22:15.414 [2024-07-16 01:27:41.276761] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:22:15.414 [2024-07-16 01:27:41.276766] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:15.414 [2024-07-16 01:27:41.276769] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:15.414 [2024-07-16 01:27:41.276772] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfafec0): datao=0, datal=8, cccid=4
00:22:15.414 [2024-07-16 01:27:41.276776] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10335c0) on tqpair(0xfafec0): expected_datao=0, payload_size=8
00:22:15.414 [2024-07-16 01:27:41.276779] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:15.414 [2024-07-16 01:27:41.276784] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:15.414 [2024-07-16 01:27:41.276787] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:15.414 [2024-07-16 01:27:41.318457] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:15.414 [2024-07-16 01:27:41.318468] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:15.414 [2024-07-16 01:27:41.318471] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:15.414 [2024-07-16 01:27:41.318475] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10335c0) on tqpair=0xfafec0 =====================================================
00:22:15.414 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:22:15.414 =====================================================
00:22:15.414 Controller Capabilities/Features
00:22:15.414 ================================
00:22:15.414 Vendor ID: 0000
00:22:15.414 Subsystem Vendor ID: 0000
00:22:15.414 Serial Number: ....................
00:22:15.414 Model Number: ........................................
00:22:15.414 Firmware Version: 24.09
00:22:15.414 Recommended Arb Burst: 0
00:22:15.414 IEEE OUI Identifier: 00 00 00
00:22:15.414 Multi-path I/O
00:22:15.414 May have multiple subsystem ports: No
00:22:15.414 May have multiple controllers: No
00:22:15.414 Associated with SR-IOV VF: No
00:22:15.414 Max Data Transfer Size: 131072
00:22:15.414 Max Number of Namespaces: 0
00:22:15.414 Max Number of I/O Queues: 1024
00:22:15.414 NVMe Specification Version (VS): 1.3
00:22:15.414 NVMe Specification Version (Identify): 1.3
00:22:15.414 Maximum Queue Entries: 128
00:22:15.414 Contiguous Queues Required: Yes
00:22:15.414 Arbitration Mechanisms Supported
00:22:15.414 Weighted Round Robin: Not Supported
00:22:15.414 Vendor Specific: Not Supported
00:22:15.414 Reset Timeout: 15000 ms
00:22:15.414 Doorbell Stride: 4 bytes
00:22:15.414 NVM Subsystem Reset: Not Supported
00:22:15.414 Command Sets Supported
00:22:15.414 NVM Command Set: Supported
00:22:15.414 Boot Partition: Not Supported
00:22:15.414 Memory Page Size Minimum: 4096 bytes
00:22:15.414 Memory Page Size Maximum: 4096 bytes
00:22:15.414 Persistent Memory Region: Not Supported
00:22:15.414 Optional Asynchronous Events Supported
00:22:15.414 Namespace Attribute Notices: Not Supported
00:22:15.414 Firmware Activation Notices: Not Supported
00:22:15.414 ANA Change Notices: Not Supported
00:22:15.414 PLE Aggregate Log Change Notices: Not Supported
00:22:15.414 LBA Status Info Alert Notices: Not Supported
00:22:15.414 EGE Aggregate Log Change Notices: Not Supported
00:22:15.414 Normal NVM Subsystem Shutdown event: Not Supported
00:22:15.414 Zone Descriptor Change Notices: Not Supported
00:22:15.414 Discovery Log Change Notices: Supported
00:22:15.414 Controller Attributes
00:22:15.414 128-bit Host Identifier: Not Supported
00:22:15.414 Non-Operational Permissive Mode: Not Supported
00:22:15.414 NVM Sets: Not Supported
00:22:15.414 Read Recovery Levels: Not Supported
00:22:15.414 Endurance Groups: Not Supported
00:22:15.414 Predictable Latency Mode: Not Supported
00:22:15.414 Traffic Based Keep Alive: Not Supported
00:22:15.414 Namespace Granularity: Not Supported
00:22:15.414 SQ Associations: Not Supported
00:22:15.414 UUID List: Not Supported
00:22:15.414 Multi-Domain Subsystem: Not Supported
00:22:15.414 Fixed Capacity Management: Not Supported
00:22:15.414 Variable Capacity Management: Not Supported
00:22:15.414 Delete Endurance Group: Not Supported
00:22:15.414 Delete NVM Set: Not Supported
00:22:15.414 Extended LBA Formats Supported: Not Supported
00:22:15.414 Flexible Data Placement Supported: Not Supported
00:22:15.414
00:22:15.414 Controller Memory Buffer Support
00:22:15.414 ================================
00:22:15.414 Supported: No
00:22:15.414
00:22:15.414 Persistent Memory Region Support
00:22:15.414 ================================
00:22:15.414 Supported: No
00:22:15.414
00:22:15.414 Admin Command Set Attributes
00:22:15.414 ============================
00:22:15.414 Security Send/Receive: Not Supported
00:22:15.414 Format NVM: Not Supported
00:22:15.414 Firmware Activate/Download: Not Supported
00:22:15.414 Namespace Management: Not Supported
00:22:15.414 Device Self-Test: Not Supported
00:22:15.414 Directives: Not Supported
00:22:15.414 NVMe-MI: Not Supported
00:22:15.414 Virtualization Management: Not Supported
00:22:15.414 Doorbell Buffer Config: Not Supported
00:22:15.414 Get LBA Status Capability: Not Supported
00:22:15.414 Command & Feature Lockdown Capability: Not Supported
00:22:15.414 Abort Command Limit: 1
00:22:15.414 Async Event Request Limit: 4
00:22:15.414 Number of Firmware Slots: N/A
00:22:15.414 Firmware Slot 1 Read-Only: N/A
00:22:15.414 Firmware Activation Without Reset: N/A
00:22:15.414 Multiple Update Detection Support: N/A
00:22:15.414 Firmware Update Granularity: No Information Provided
00:22:15.414 Per-Namespace SMART Log: No
00:22:15.414 Asymmetric Namespace Access Log Page: Not Supported
00:22:15.414 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:22:15.414 Command Effects Log Page: Not Supported
00:22:15.414 Get Log Page Extended Data: Supported
00:22:15.414 Telemetry Log Pages: Not Supported
00:22:15.414 Persistent Event Log Pages: Not Supported
00:22:15.414 Supported Log Pages Log Page: May Support
00:22:15.414 Commands Supported & Effects Log Page: Not Supported
00:22:15.414 Feature Identifiers & Effects Log Page: May Support
00:22:15.414 NVMe-MI Commands & Effects Log Page: May Support
00:22:15.414 Data Area 4 for Telemetry Log: Not Supported
00:22:15.414 Error Log Page Entries Supported: 128
00:22:15.414 Keep Alive: Not Supported
00:22:15.414
00:22:15.414 NVM Command Set Attributes
00:22:15.414 ==========================
00:22:15.414 Submission Queue Entry Size
00:22:15.414 Max: 1
00:22:15.414 Min: 1
00:22:15.414 Completion Queue Entry Size
00:22:15.414 Max: 1
00:22:15.414 Min: 1
00:22:15.414 Number of Namespaces: 0
00:22:15.414 Compare Command: Not Supported
00:22:15.414 Write Uncorrectable Command: Not Supported
00:22:15.414 Dataset Management Command: Not Supported
00:22:15.414 Write Zeroes Command: Not Supported
00:22:15.414 Set Features Save Field: Not Supported
00:22:15.414 Reservations: Not Supported
00:22:15.414 Timestamp: Not Supported
00:22:15.414 Copy: Not Supported
00:22:15.414 Volatile Write Cache: Not Present
00:22:15.414 Atomic Write Unit (Normal): 1
00:22:15.414 Atomic Write Unit (PFail): 1
00:22:15.414 Atomic Compare & Write Unit: 1
00:22:15.414 Fused Compare & Write: Supported
00:22:15.414 Scatter-Gather List
00:22:15.414 SGL Command Set: Supported
00:22:15.414 SGL Keyed: Supported
00:22:15.414 SGL Bit Bucket Descriptor: Not Supported
00:22:15.414 SGL Metadata Pointer: Not Supported
00:22:15.414 Oversized SGL: Not Supported
00:22:15.414 SGL Metadata Address: Not Supported
00:22:15.414 SGL Offset: Supported
00:22:15.414 Transport SGL Data Block: Not Supported
00:22:15.414 Replay Protected Memory Block: Not Supported
00:22:15.414
00:22:15.414 Firmware Slot Information
00:22:15.414 =========================
00:22:15.414 Active slot: 0
00:22:15.414
00:22:15.414
00:22:15.414 Error Log
00:22:15.414 =========
00:22:15.414
00:22:15.414 Active Namespaces
00:22:15.414 =================
00:22:15.414 Discovery Log Page
00:22:15.414 ==================
00:22:15.414 Generation Counter: 2
00:22:15.414 Number of Records: 2
00:22:15.414 Record Format: 0
00:22:15.414
00:22:15.414 Discovery Log Entry 0
00:22:15.414 ----------------------
00:22:15.414 Transport Type: 3 (TCP)
00:22:15.414 Address Family: 1 (IPv4)
00:22:15.414 Subsystem Type: 3 (Current Discovery Subsystem)
00:22:15.414 Entry Flags:
00:22:15.414 Duplicate Returned Information: 1
00:22:15.414 Explicit Persistent Connection Support for Discovery: 1
00:22:15.414 Transport Requirements:
00:22:15.414 Secure Channel: Not Required
00:22:15.414 Port ID: 0 (0x0000)
00:22:15.414 Controller ID: 65535 (0xffff)
00:22:15.414 Admin Max SQ Size: 128
00:22:15.414 Transport Service Identifier: 4420
00:22:15.415 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:22:15.415 Transport Address: 10.0.0.2
00:22:15.415 Discovery Log Entry 1
00:22:15.415 ----------------------
00:22:15.415 Transport Type: 3 (TCP)
00:22:15.415 Address Family: 1 (IPv4)
00:22:15.415 Subsystem Type: 2 (NVM Subsystem)
00:22:15.415 Entry Flags:
00:22:15.415 Duplicate Returned Information: 0
00:22:15.415 Explicit Persistent Connection Support for Discovery: 0
00:22:15.415 Transport Requirements:
00:22:15.415 Secure Channel: Not Required
00:22:15.415 Port ID: 0 (0x0000)
00:22:15.415 Controller ID: 65535 (0xffff)
00:22:15.415 Admin Max SQ Size: 128
00:22:15.415 Transport Service Identifier: 4420
00:22:15.415 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:22:15.415 Transport Address: 10.0.0.2 [2024-07-16 01:27:41.318552] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:22:15.415 [2024-07-16 01:27:41.318563] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1032fc0) on tqpair=0xfafec0
00:22:15.415 [2024-07-16 01:27:41.318571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.415 [2024-07-16 01:27:41.318575] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1033140) on tqpair=0xfafec0
00:22:15.415 [2024-07-16 01:27:41.318579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.415 [2024-07-16 01:27:41.318583] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10332c0) on tqpair=0xfafec0
00:22:15.415 [2024-07-16 01:27:41.318587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.415 [2024-07-16 01:27:41.318591] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1033440) on tqpair=0xfafec0
00:22:15.415 [2024-07-16 01:27:41.318595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.415 [2024-07-16 01:27:41.318603] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:15.415 [2024-07-16 01:27:41.318606] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:15.415 [2024-07-16 01:27:41.318610] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfafec0)
00:22:15.415 [2024-07-16 01:27:41.318617] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.415 [2024-07-16 01:27:41.318629] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1033440, cid 3, qid 0
00:22:15.415 [2024-07-16 01:27:41.318692] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:15.415 [2024-07-16 01:27:41.318697] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:15.415 [2024-07-16 01:27:41.318700] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:15.415 [2024-07-16 01:27:41.318703] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1033440) on tqpair=0xfafec0
00:22:15.415 [2024-07-16 01:27:41.318709] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:15.415 [2024-07-16 01:27:41.318712] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:15.415 [2024-07-16 01:27:41.318715] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfafec0)
00:22:15.415 [2024-07-16 01:27:41.318721]
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.415 [2024-07-16 01:27:41.318733] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1033440, cid 3, qid 0 00:22:15.415 [2024-07-16 01:27:41.318809] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.415 [2024-07-16 01:27:41.318814] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.415 [2024-07-16 01:27:41.318817] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.415 [2024-07-16 01:27:41.318820] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1033440) on tqpair=0xfafec0 00:22:15.415 [2024-07-16 01:27:41.318825] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:22:15.415 [2024-07-16 01:27:41.318828] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:22:15.415 [2024-07-16 01:27:41.318836] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.415 [2024-07-16 01:27:41.318839] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.415 [2024-07-16 01:27:41.318842] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfafec0) 00:22:15.415 [2024-07-16 01:27:41.318848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.415 [2024-07-16 01:27:41.318857] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1033440, cid 3, qid 0 00:22:15.415 [2024-07-16 01:27:41.318920] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.415 [2024-07-16 01:27:41.318925] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.415 [2024-07-16 01:27:41.318930] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.415 [2024-07-16 01:27:41.318933] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1033440) on tqpair=0xfafec0 00:22:15.415 [2024-07-16 01:27:41.318941] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.415 [2024-07-16 01:27:41.318944] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.415 [2024-07-16 01:27:41.318947] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfafec0) 00:22:15.415 [2024-07-16 01:27:41.318953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.415 [2024-07-16 01:27:41.318962] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1033440, cid 3, qid 0 00:22:15.415 [2024-07-16 01:27:41.319042] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.415 [2024-07-16 01:27:41.319048] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.415 [2024-07-16 01:27:41.319051] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.415 [2024-07-16 01:27:41.319054] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1033440) on tqpair=0xfafec0 00:22:15.415 [2024-07-16 01:27:41.319061] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.415 [2024-07-16 01:27:41.319065] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.415 [2024-07-16 01:27:41.319068] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfafec0) 00:22:15.415 [2024-07-16 01:27:41.319073] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.415 [2024-07-16 01:27:41.319081] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1033440, cid 3, qid 0 00:22:15.415 [2024-07-16 01:27:41.319139] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.415 [2024-07-16 01:27:41.319144] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.415 [2024-07-16 01:27:41.319147] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.415 [2024-07-16 01:27:41.319150] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1033440) on tqpair=0xfafec0 00:22:15.415 [2024-07-16 01:27:41.319157] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.415 [2024-07-16 01:27:41.319161] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.415 [2024-07-16 01:27:41.319164] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfafec0) 00:22:15.415 [2024-07-16 01:27:41.319169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.415 [2024-07-16 01:27:41.319178] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1033440, cid 3, qid 0 00:22:15.415 [2024-07-16 01:27:41.319258] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.415 [2024-07-16 01:27:41.319264] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.415 [2024-07-16 01:27:41.319267] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.415 [2024-07-16 01:27:41.319270] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1033440) on tqpair=0xfafec0 00:22:15.415 [2024-07-16 01:27:41.319277] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.415 [2024-07-16 01:27:41.319280] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.415 [2024-07-16 01:27:41.319283] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfafec0) 00:22:15.415 [2024-07-16 01:27:41.319289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.415 [2024-07-16 01:27:41.319297] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1033440, cid 3, qid 0 00:22:15.415 [2024-07-16 01:27:41.319366] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.415 [2024-07-16 01:27:41.319371] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.415 [2024-07-16 01:27:41.319374] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.415 [2024-07-16 01:27:41.319475] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1033440) on tqpair=0xfafec0 00:22:15.415 [2024-07-16 01:27:41.319483] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.415 [2024-07-16 01:27:41.319487] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.415 [2024-07-16 01:27:41.319490] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfafec0) 00:22:15.415 [2024-07-16 01:27:41.319495] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 
cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.415 [2024-07-16 01:27:41.319505] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1033440, cid 3, qid 0 00:22:15.415 [2024-07-16 01:27:41.319569] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.415 [2024-07-16 01:27:41.319574] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.415 [2024-07-16 01:27:41.319577] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.415 [2024-07-16 01:27:41.319580] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1033440) on tqpair=0xfafec0 00:22:15.415 [2024-07-16 01:27:41.319588] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.415 [2024-07-16 01:27:41.319591] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.415 [2024-07-16 01:27:41.319594] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfafec0) 00:22:15.415 [2024-07-16 01:27:41.319599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.415 [2024-07-16 01:27:41.319608] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1033440, cid 3, qid 0 00:22:15.415 [2024-07-16 01:27:41.319668] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.415 [2024-07-16 01:27:41.319674] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.415 [2024-07-16 01:27:41.319677] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.415 [2024-07-16 01:27:41.319680] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1033440) on tqpair=0xfafec0 00:22:15.415 [2024-07-16 01:27:41.319687] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.415 [2024-07-16 01:27:41.319691] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.415 [2024-07-16 01:27:41.319693] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfafec0) 00:22:15.415 [2024-07-16 01:27:41.319699] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.415 [2024-07-16 01:27:41.319708] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1033440, cid 3, qid 0 00:22:15.415 [2024-07-16 01:27:41.319767] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.415 [2024-07-16 01:27:41.319773] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.415 [2024-07-16 01:27:41.319776] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.415 [2024-07-16 01:27:41.319779] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1033440) on tqpair=0xfafec0 00:22:15.416 [2024-07-16 01:27:41.319787] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.416 [2024-07-16 01:27:41.319790] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.416 [2024-07-16 01:27:41.319793] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfafec0) 00:22:15.416 [2024-07-16 01:27:41.319798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.416 [2024-07-16 01:27:41.319807] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1033440, cid 3, qid 0 00:22:15.416 [2024-07-16 
01:27:41.319894] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.416 [2024-07-16 01:27:41.319899] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.416 [2024-07-16 01:27:41.319902] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.416 [2024-07-16 01:27:41.319905] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1033440) on tqpair=0xfafec0 00:22:15.416 [2024-07-16 01:27:41.319915] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.416 [2024-07-16 01:27:41.319919] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.416 [2024-07-16 01:27:41.319922] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfafec0) 00:22:15.416 [2024-07-16 01:27:41.319927] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.416 [2024-07-16 01:27:41.319936] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1033440, cid 3, qid 0 00:22:15.416 [2024-07-16 01:27:41.320002] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.416 [2024-07-16 01:27:41.320007] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.416 [2024-07-16 01:27:41.320010] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.416 [2024-07-16 01:27:41.320014] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1033440) on tqpair=0xfafec0 00:22:15.416 [2024-07-16 01:27:41.320021] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.416 [2024-07-16 01:27:41.320024] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.416 [2024-07-16 01:27:41.320027] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfafec0) 00:22:15.416 [2024-07-16 01:27:41.320033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.416 [2024-07-16 01:27:41.320041] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1033440, cid 3, qid 0 00:22:15.416 [2024-07-16 01:27:41.320118] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.416 [2024-07-16 01:27:41.320123] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.416 [2024-07-16 01:27:41.320126] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.416 [2024-07-16 01:27:41.320129] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1033440) on tqpair=0xfafec0 00:22:15.416 [2024-07-16 01:27:41.320137] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.416 [2024-07-16 01:27:41.320140] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.416 [2024-07-16 01:27:41.320143] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfafec0) 00:22:15.416 [2024-07-16 01:27:41.320148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.416 [2024-07-16 01:27:41.320157] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1033440, cid 3, qid 0 00:22:15.416 [2024-07-16 01:27:41.320216] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.416 [2024-07-16 01:27:41.320222] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.416 [2024-07-16 
01:27:41.320225] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.416 [2024-07-16 01:27:41.320228] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1033440) on tqpair=0xfafec0 00:22:15.416 [2024-07-16 01:27:41.320235] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.416 [2024-07-16 01:27:41.320238] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.416 [2024-07-16 01:27:41.320241] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfafec0) 00:22:15.416 [2024-07-16 01:27:41.320247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.416 [2024-07-16 01:27:41.320255] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1033440, cid 3, qid 0 00:22:15.416 [2024-07-16 01:27:41.320332] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.416 [2024-07-16 01:27:41.324341] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.416 [2024-07-16 01:27:41.324347] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.416 [2024-07-16 01:27:41.324350] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1033440) on tqpair=0xfafec0 00:22:15.416 [2024-07-16 01:27:41.324359] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.416 [2024-07-16 01:27:41.324365] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.416 [2024-07-16 01:27:41.324368] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfafec0) 00:22:15.416 [2024-07-16 01:27:41.324374] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.416 [2024-07-16 01:27:41.324385] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1033440, cid 3, qid 0 00:22:15.416 [2024-07-16 01:27:41.324539] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.416 [2024-07-16 01:27:41.324544] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.416 [2024-07-16 01:27:41.324547] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.416 [2024-07-16 01:27:41.324550] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1033440) on tqpair=0xfafec0 00:22:15.416 [2024-07-16 01:27:41.324557] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:22:15.416 00:22:15.416 01:27:41 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:15.416 [2024-07-16 01:27:41.360298] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
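
For context on the run that starts here: the -r argument in the spdk_nvme_identify invocation above is an SPDK transport ID string, and the entire admin-queue bring-up the trace records next (icreq, FABRIC CONNECT, register property gets and sets, IDENTIFY, AER setup, keep-alive configuration) is driven by a single connect call. A minimal host-side sketch of the same flow follows; it uses SPDK's public API but is an illustration under assumptions (the program name and printed line are hypothetical), not part of this test run:

    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch"; /* hypothetical app name */
        if (spdk_env_init(&env_opts) < 0) {
            return 1;
        }

        /* Same transport ID string the test passes via -r. */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 "
                "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        /* Blocks while the driver walks the state machine traced
         * below: icreq, FABRIC CONNECT, property reads, controller
         * enable, IDENTIFY, AER and keep-alive setup. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("CNTLID 0x%04x, MDTS %u\n",
               (unsigned)cdata->cntlid, (unsigned)cdata->mdts);

        spdk_nvme_detach(ctrlr);
        return 0;
    }

spdk_nvme_connect() only returns once the controller reaches the ready state, which is why the identify output appears in one burst after the debug entries settle.
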
00:22:15.416 [2024-07-16 01:27:41.360331] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3466233 ] 00:22:15.416 EAL: No free 2048 kB hugepages reported on node 1 00:22:15.416 [2024-07-16 01:27:41.388260] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:22:15.416 [2024-07-16 01:27:41.388298] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:15.416 [2024-07-16 01:27:41.388303] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:15.416 [2024-07-16 01:27:41.388313] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:15.416 [2024-07-16 01:27:41.388318] sock.c: 357:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:15.416 [2024-07-16 01:27:41.388742] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:22:15.416 [2024-07-16 01:27:41.388772] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2155ec0 0 00:22:15.676 [2024-07-16 01:27:41.399350] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:15.676 [2024-07-16 01:27:41.399370] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:15.676 [2024-07-16 01:27:41.399375] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:15.676 [2024-07-16 01:27:41.399378] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:15.676 [2024-07-16 01:27:41.399399] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.676 [2024-07-16 01:27:41.399404] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.676 [2024-07-16 01:27:41.399408] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2155ec0) 00:22:15.676 [2024-07-16 01:27:41.399418] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:15.676 [2024-07-16 01:27:41.399434] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d8fc0, cid 0, qid 0 00:22:15.676 [2024-07-16 01:27:41.407347] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.676 [2024-07-16 01:27:41.407359] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.676 [2024-07-16 01:27:41.407362] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.676 [2024-07-16 01:27:41.407371] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d8fc0) on tqpair=0x2155ec0 00:22:15.676 [2024-07-16 01:27:41.407380] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:15.676 [2024-07-16 01:27:41.407386] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:22:15.676 [2024-07-16 01:27:41.407391] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:22:15.676 [2024-07-16 01:27:41.407402] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.676 [2024-07-16 01:27:41.407406] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
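
The entries that follow trace the standard NVMe controller-enable handshake, carried here over NVMe-oF Property Get/Set capsules (the FABRIC PROPERTY GET/SET notices) rather than BAR0 register access: read VS and CAP, check CC.EN, disable and wait for CSTS.RDY = 0, set CC.EN = 1, then poll until CSTS.RDY = 1. Reduced to a sketch, with prop_get()/prop_set() as hypothetical stand-ins for the property capsules (they are not SPDK functions):

    #include <stdint.h>

    /* Hypothetical wrappers around the Fabrics Property Get/Set
     * admin capsules seen in the trace. */
    extern uint32_t prop_get(uint32_t offset);
    extern void prop_set(uint32_t offset, uint32_t value);

    #define NVME_REG_CC   0x14u /* Controller Configuration */
    #define NVME_REG_CSTS 0x1cu /* Controller Status */

    static void
    enable_controller(void)
    {
        uint32_t cc = prop_get(NVME_REG_CC);

        if (cc & 0x1u) {                       /* "check en": CC.EN already set */
            prop_set(NVME_REG_CC, cc & ~0x1u); /* disable first */
            while (prop_get(NVME_REG_CSTS) & 0x1u) {
                /* "disable and wait for CSTS.RDY = 0" */
            }
        }
        prop_set(NVME_REG_CC, cc | 0x1u);      /* "Setting CC.EN = 1" */
        while (!(prop_get(NVME_REG_CSTS) & 0x1u)) {
            /* "wait for CSTS.RDY = 1" */
        }
    }

Only once CSTS.RDY reads back 1 does the driver move on to IDENTIFY and the rest of admin setup, matching the "CC.EN = 1 && CSTS.RDY = 1 - controller is ready" transition a few entries later.
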
00:22:15.676 [2024-07-16 01:27:41.407409] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2155ec0) 00:22:15.676 [2024-07-16 01:27:41.407416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.676 [2024-07-16 01:27:41.407430] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d8fc0, cid 0, qid 0 00:22:15.676 [2024-07-16 01:27:41.407592] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.676 [2024-07-16 01:27:41.407598] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.676 [2024-07-16 01:27:41.407601] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.676 [2024-07-16 01:27:41.407604] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d8fc0) on tqpair=0x2155ec0 00:22:15.676 [2024-07-16 01:27:41.407608] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:22:15.676 [2024-07-16 01:27:41.407614] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:22:15.676 [2024-07-16 01:27:41.407620] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.676 [2024-07-16 01:27:41.407624] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.676 [2024-07-16 01:27:41.407626] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2155ec0) 00:22:15.676 [2024-07-16 01:27:41.407632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.676 [2024-07-16 01:27:41.407642] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d8fc0, cid 0, qid 0 00:22:15.676 [2024-07-16 01:27:41.407707] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.676 [2024-07-16 01:27:41.407712] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.676 [2024-07-16 01:27:41.407715] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.676 [2024-07-16 01:27:41.407719] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d8fc0) on tqpair=0x2155ec0 00:22:15.676 [2024-07-16 01:27:41.407723] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:22:15.676 [2024-07-16 01:27:41.407729] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:22:15.676 [2024-07-16 01:27:41.407735] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.676 [2024-07-16 01:27:41.407738] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.676 [2024-07-16 01:27:41.407741] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2155ec0) 00:22:15.676 [2024-07-16 01:27:41.407746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.676 [2024-07-16 01:27:41.407755] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d8fc0, cid 0, qid 0 00:22:15.676 [2024-07-16 01:27:41.407817] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.676 [2024-07-16 01:27:41.407823] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:22:15.676 [2024-07-16 01:27:41.407825] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.676 [2024-07-16 01:27:41.407829] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d8fc0) on tqpair=0x2155ec0 00:22:15.676 [2024-07-16 01:27:41.407835] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:15.676 [2024-07-16 01:27:41.407842] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.676 [2024-07-16 01:27:41.407846] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.676 [2024-07-16 01:27:41.407849] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2155ec0) 00:22:15.676 [2024-07-16 01:27:41.407854] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.676 [2024-07-16 01:27:41.407863] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d8fc0, cid 0, qid 0 00:22:15.676 [2024-07-16 01:27:41.407923] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.676 [2024-07-16 01:27:41.407929] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.676 [2024-07-16 01:27:41.407932] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.676 [2024-07-16 01:27:41.407935] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d8fc0) on tqpair=0x2155ec0 00:22:15.676 [2024-07-16 01:27:41.407938] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:22:15.676 [2024-07-16 01:27:41.407942] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:22:15.676 [2024-07-16 01:27:41.407948] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:15.676 [2024-07-16 01:27:41.408053] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:22:15.676 [2024-07-16 01:27:41.408056] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:15.676 [2024-07-16 01:27:41.408063] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.676 [2024-07-16 01:27:41.408066] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.676 [2024-07-16 01:27:41.408069] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2155ec0) 00:22:15.677 [2024-07-16 01:27:41.408074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.677 [2024-07-16 01:27:41.408083] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d8fc0, cid 0, qid 0 00:22:15.677 [2024-07-16 01:27:41.408146] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.677 [2024-07-16 01:27:41.408152] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.677 [2024-07-16 01:27:41.408155] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.677 [2024-07-16 01:27:41.408158] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d8fc0) on 
tqpair=0x2155ec0 00:22:15.677 [2024-07-16 01:27:41.408161] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:15.677 [2024-07-16 01:27:41.408169] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.677 [2024-07-16 01:27:41.408172] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.677 [2024-07-16 01:27:41.408175] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2155ec0) 00:22:15.677 [2024-07-16 01:27:41.408181] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.677 [2024-07-16 01:27:41.408189] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d8fc0, cid 0, qid 0 00:22:15.677 [2024-07-16 01:27:41.408251] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.677 [2024-07-16 01:27:41.408256] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.677 [2024-07-16 01:27:41.408259] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.677 [2024-07-16 01:27:41.408264] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d8fc0) on tqpair=0x2155ec0 00:22:15.677 [2024-07-16 01:27:41.408267] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:15.677 [2024-07-16 01:27:41.408271] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:22:15.677 [2024-07-16 01:27:41.408277] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:22:15.677 [2024-07-16 01:27:41.408287] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:22:15.677 [2024-07-16 01:27:41.408295] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.677 [2024-07-16 01:27:41.408298] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2155ec0) 00:22:15.677 [2024-07-16 01:27:41.408303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.677 [2024-07-16 01:27:41.408312] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d8fc0, cid 0, qid 0 00:22:15.677 [2024-07-16 01:27:41.408423] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:15.677 [2024-07-16 01:27:41.408429] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:15.677 [2024-07-16 01:27:41.408432] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:15.677 [2024-07-16 01:27:41.408435] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2155ec0): datao=0, datal=4096, cccid=0 00:22:15.677 [2024-07-16 01:27:41.408439] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21d8fc0) on tqpair(0x2155ec0): expected_datao=0, payload_size=4096 00:22:15.677 [2024-07-16 01:27:41.408443] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.677 [2024-07-16 01:27:41.408457] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:15.677 [2024-07-16 01:27:41.408461] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:15.677 [2024-07-16 01:27:41.449449] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.677 [2024-07-16 01:27:41.449458] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.677 [2024-07-16 01:27:41.449461] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.677 [2024-07-16 01:27:41.449465] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d8fc0) on tqpair=0x2155ec0 00:22:15.677 [2024-07-16 01:27:41.449471] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:22:15.677 [2024-07-16 01:27:41.449475] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:22:15.677 [2024-07-16 01:27:41.449479] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:22:15.677 [2024-07-16 01:27:41.449483] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:22:15.677 [2024-07-16 01:27:41.449486] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:22:15.677 [2024-07-16 01:27:41.449490] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:22:15.677 [2024-07-16 01:27:41.449499] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:22:15.677 [2024-07-16 01:27:41.449508] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.677 [2024-07-16 01:27:41.449511] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.677 [2024-07-16 01:27:41.449515] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2155ec0) 00:22:15.677 [2024-07-16 01:27:41.449521] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:15.677 [2024-07-16 01:27:41.449534] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d8fc0, cid 0, qid 0 00:22:15.677 [2024-07-16 01:27:41.449597] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.677 [2024-07-16 01:27:41.449603] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.677 [2024-07-16 01:27:41.449606] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.677 [2024-07-16 01:27:41.449609] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d8fc0) on tqpair=0x2155ec0 00:22:15.677 [2024-07-16 01:27:41.449614] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.677 [2024-07-16 01:27:41.449617] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.677 [2024-07-16 01:27:41.449620] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2155ec0) 00:22:15.677 [2024-07-16 01:27:41.449625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.677 [2024-07-16 01:27:41.449631] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.677 [2024-07-16 01:27:41.449634] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.677 [2024-07-16 01:27:41.449637] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2155ec0) 00:22:15.677 [2024-07-16 01:27:41.449641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.677 [2024-07-16 01:27:41.449646] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.677 [2024-07-16 01:27:41.449649] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.677 [2024-07-16 01:27:41.449652] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2155ec0) 00:22:15.677 [2024-07-16 01:27:41.449657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.677 [2024-07-16 01:27:41.449662] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.677 [2024-07-16 01:27:41.449665] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.677 [2024-07-16 01:27:41.449668] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2155ec0) 00:22:15.677 [2024-07-16 01:27:41.449672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.677 [2024-07-16 01:27:41.449676] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:15.677 [2024-07-16 01:27:41.449685] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:15.677 [2024-07-16 01:27:41.449691] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.677 [2024-07-16 01:27:41.449694] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2155ec0) 00:22:15.677 [2024-07-16 01:27:41.449699] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.677 [2024-07-16 01:27:41.449710] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d8fc0, cid 0, qid 0 00:22:15.677 [2024-07-16 01:27:41.449714] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d9140, cid 1, qid 0 00:22:15.677 [2024-07-16 01:27:41.449718] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d92c0, cid 2, qid 0 00:22:15.677 [2024-07-16 01:27:41.449722] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d9440, cid 3, qid 0 00:22:15.677 [2024-07-16 01:27:41.449726] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d95c0, cid 4, qid 0 00:22:15.677 [2024-07-16 01:27:41.449820] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.677 [2024-07-16 01:27:41.449826] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.677 [2024-07-16 01:27:41.449829] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.677 [2024-07-16 01:27:41.449833] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d95c0) on tqpair=0x2155ec0 00:22:15.677 [2024-07-16 01:27:41.449838] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:22:15.677 [2024-07-16 01:27:41.449842] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to 
identify controller iocs specific (timeout 30000 ms) 00:22:15.677 [2024-07-16 01:27:41.449850] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:22:15.677 [2024-07-16 01:27:41.449856] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:15.677 [2024-07-16 01:27:41.449861] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.677 [2024-07-16 01:27:41.449865] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.677 [2024-07-16 01:27:41.449867] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2155ec0) 00:22:15.677 [2024-07-16 01:27:41.449873] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:15.677 [2024-07-16 01:27:41.449882] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d95c0, cid 4, qid 0 00:22:15.677 [2024-07-16 01:27:41.449945] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.677 [2024-07-16 01:27:41.449952] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.677 [2024-07-16 01:27:41.449955] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.677 [2024-07-16 01:27:41.449960] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d95c0) on tqpair=0x2155ec0 00:22:15.677 [2024-07-16 01:27:41.450014] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:22:15.677 [2024-07-16 01:27:41.450023] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:15.677 [2024-07-16 01:27:41.450030] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.677 [2024-07-16 01:27:41.450033] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2155ec0) 00:22:15.677 [2024-07-16 01:27:41.450038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.677 [2024-07-16 01:27:41.450047] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d95c0, cid 4, qid 0 00:22:15.677 [2024-07-16 01:27:41.450120] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:15.677 [2024-07-16 01:27:41.450126] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:15.677 [2024-07-16 01:27:41.450129] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:15.677 [2024-07-16 01:27:41.450132] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2155ec0): datao=0, datal=4096, cccid=4 00:22:15.678 [2024-07-16 01:27:41.450136] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21d95c0) on tqpair(0x2155ec0): expected_datao=0, payload_size=4096 00:22:15.678 [2024-07-16 01:27:41.450139] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.678 [2024-07-16 01:27:41.450153] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:15.678 [2024-07-16 01:27:41.450157] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:15.678 [2024-07-16 01:27:41.495344] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu 
type = 5 00:22:15.678 [2024-07-16 01:27:41.495355] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.678 [2024-07-16 01:27:41.495358] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.678 [2024-07-16 01:27:41.495362] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d95c0) on tqpair=0x2155ec0 00:22:15.678 [2024-07-16 01:27:41.495370] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:22:15.678 [2024-07-16 01:27:41.495380] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:22:15.678 [2024-07-16 01:27:41.495391] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:22:15.678 [2024-07-16 01:27:41.495398] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.678 [2024-07-16 01:27:41.495401] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2155ec0) 00:22:15.678 [2024-07-16 01:27:41.495407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.678 [2024-07-16 01:27:41.495419] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d95c0, cid 4, qid 0 00:22:15.678 [2024-07-16 01:27:41.495577] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:15.678 [2024-07-16 01:27:41.495583] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:15.678 [2024-07-16 01:27:41.495586] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:15.678 [2024-07-16 01:27:41.495589] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2155ec0): datao=0, datal=4096, cccid=4 00:22:15.678 [2024-07-16 01:27:41.495593] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21d95c0) on tqpair(0x2155ec0): expected_datao=0, payload_size=4096 00:22:15.678 [2024-07-16 01:27:41.495597] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.678 [2024-07-16 01:27:41.495609] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:15.678 [2024-07-16 01:27:41.495613] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:15.678 [2024-07-16 01:27:41.537467] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.678 [2024-07-16 01:27:41.537475] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.678 [2024-07-16 01:27:41.537478] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.678 [2024-07-16 01:27:41.537481] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d95c0) on tqpair=0x2155ec0 00:22:15.678 [2024-07-16 01:27:41.537497] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:15.678 [2024-07-16 01:27:41.537505] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:15.678 [2024-07-16 01:27:41.537512] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.678 [2024-07-16 01:27:41.537515] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2155ec0) 00:22:15.678 [2024-07-16 01:27:41.537521] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.678 [2024-07-16 01:27:41.537532] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d95c0, cid 4, qid 0 00:22:15.678 [2024-07-16 01:27:41.537607] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:15.678 [2024-07-16 01:27:41.537612] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:15.678 [2024-07-16 01:27:41.537615] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:15.678 [2024-07-16 01:27:41.537618] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2155ec0): datao=0, datal=4096, cccid=4 00:22:15.678 [2024-07-16 01:27:41.537622] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21d95c0) on tqpair(0x2155ec0): expected_datao=0, payload_size=4096 00:22:15.678 [2024-07-16 01:27:41.537625] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.678 [2024-07-16 01:27:41.537639] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:15.678 [2024-07-16 01:27:41.537642] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:15.678 [2024-07-16 01:27:41.583345] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.678 [2024-07-16 01:27:41.583355] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.678 [2024-07-16 01:27:41.583359] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.678 [2024-07-16 01:27:41.583364] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d95c0) on tqpair=0x2155ec0 00:22:15.678 [2024-07-16 01:27:41.583373] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:15.678 [2024-07-16 01:27:41.583380] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:22:15.678 [2024-07-16 01:27:41.583388] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:22:15.678 [2024-07-16 01:27:41.583393] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:15.678 [2024-07-16 01:27:41.583397] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:15.678 [2024-07-16 01:27:41.583402] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:22:15.678 [2024-07-16 01:27:41.583407] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:22:15.678 [2024-07-16 01:27:41.583411] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:22:15.678 [2024-07-16 01:27:41.583416] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:22:15.678 [2024-07-16 01:27:41.583428] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.678 [2024-07-16 01:27:41.583431] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x2155ec0) 00:22:15.678 [2024-07-16 01:27:41.583438] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.678 [2024-07-16 01:27:41.583443] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.678 [2024-07-16 01:27:41.583446] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.678 [2024-07-16 01:27:41.583449] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2155ec0) 00:22:15.678 [2024-07-16 01:27:41.583454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.678 [2024-07-16 01:27:41.583468] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d95c0, cid 4, qid 0 00:22:15.678 [2024-07-16 01:27:41.583473] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d9740, cid 5, qid 0 00:22:15.678 [2024-07-16 01:27:41.583558] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.678 [2024-07-16 01:27:41.583563] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.678 [2024-07-16 01:27:41.583566] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.678 [2024-07-16 01:27:41.583569] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d95c0) on tqpair=0x2155ec0 00:22:15.678 [2024-07-16 01:27:41.583575] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.678 [2024-07-16 01:27:41.583580] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.678 [2024-07-16 01:27:41.583582] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.678 [2024-07-16 01:27:41.583586] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d9740) on tqpair=0x2155ec0 00:22:15.678 [2024-07-16 01:27:41.583593] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.678 [2024-07-16 01:27:41.583597] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2155ec0) 00:22:15.678 [2024-07-16 01:27:41.583602] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.678 [2024-07-16 01:27:41.583612] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d9740, cid 5, qid 0 00:22:15.678 [2024-07-16 01:27:41.583680] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.678 [2024-07-16 01:27:41.583688] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.678 [2024-07-16 01:27:41.583691] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.678 [2024-07-16 01:27:41.583694] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d9740) on tqpair=0x2155ec0 00:22:15.678 [2024-07-16 01:27:41.583701] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.678 [2024-07-16 01:27:41.583704] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2155ec0) 00:22:15.678 [2024-07-16 01:27:41.583710] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.678 [2024-07-16 01:27:41.583718] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d9740, cid 5, qid 0 00:22:15.678 [2024-07-16 01:27:41.583796] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.678 [2024-07-16 01:27:41.583801] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.678 [2024-07-16 01:27:41.583804] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.678 [2024-07-16 01:27:41.583807] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d9740) on tqpair=0x2155ec0 00:22:15.678 [2024-07-16 01:27:41.583814] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.678 [2024-07-16 01:27:41.583817] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2155ec0) 00:22:15.678 [2024-07-16 01:27:41.583823] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.678 [2024-07-16 01:27:41.583832] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d9740, cid 5, qid 0 00:22:15.678 [2024-07-16 01:27:41.583891] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.678 [2024-07-16 01:27:41.583896] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.678 [2024-07-16 01:27:41.583899] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.678 [2024-07-16 01:27:41.583902] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d9740) on tqpair=0x2155ec0 00:22:15.678 [2024-07-16 01:27:41.583914] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.678 [2024-07-16 01:27:41.583918] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2155ec0) 00:22:15.678 [2024-07-16 01:27:41.583924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.678 [2024-07-16 01:27:41.583930] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.678 [2024-07-16 01:27:41.583933] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2155ec0) 00:22:15.678 [2024-07-16 01:27:41.583938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.678 [2024-07-16 01:27:41.583944] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.678 [2024-07-16 01:27:41.583947] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x2155ec0) 00:22:15.678 [2024-07-16 01:27:41.583952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.678 [2024-07-16 01:27:41.583958] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.679 [2024-07-16 01:27:41.583961] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2155ec0) 00:22:15.679 [2024-07-16 01:27:41.583966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.679 [2024-07-16 01:27:41.583976] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d9740, cid 5, qid 0 00:22:15.679 [2024-07-16 01:27:41.583980] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d95c0, cid 4, qid 0 
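[editor's note] The DEBUG trail above is SPDK's controller-init state machine running over the admin queue: enable the controller (CC.EN = 1), wait for CSTS.RDY = 1, IDENTIFY the controller, post four ASYNC EVENT REQUESTs, set the keep-alive timer (5000000 us) and queue count, enumerate namespace 1, and finally fetch log pages via the four GET LOG PAGE commands just logged (cdw10 07ff0001 / 007f0002 / 007f0003 / 03ff0005 decode to pages 01h Error Information, 02h SMART/Health, 03h Firmware Slot, and 05h Commands Supported and Effects). For orientation only, a minimal host-side sketch of reaching the same target with SPDK's public include/spdk/nvme.h API; this is not the test's own code, error handling is trimmed, and env setup details vary by SPDK version. spdk_nvme_connect() internally drives the same init sequence traced above before returning.

    #include <stdio.h>
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {};
        struct spdk_nvme_ctrlr *ctrlr;

        spdk_env_opts_init(&env_opts);
        if (spdk_env_init(&env_opts) < 0) {
            return 1;
        }

        /* Same endpoint as the log: TCP target 10.0.0.2:4420, cnode1. */
        spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_TCP);
        trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;
        snprintf(trid.traddr, sizeof(trid.traddr), "10.0.0.2");
        snprintf(trid.trsvcid, sizeof(trid.trsvcid), "4420");
        snprintf(trid.subnqn, sizeof(trid.subnqn), "nqn.2016-06.io.spdk:cnode1");

        /* Blocks until the controller reaches the "ready" state logged above. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            fprintf(stderr, "connect failed\n");
            return 1;
        }

        /* The identify data printed in the report below comes from here. */
        const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("Model Number: %.40s\n", cdata->mn);

        spdk_nvme_detach(ctrlr);
        return 0;
    }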
00:22:15.679 [2024-07-16 01:27:41.583986] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d98c0, cid 6, qid 0 00:22:15.679 [2024-07-16 01:27:41.583990] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d9a40, cid 7, qid 0 00:22:15.679 [2024-07-16 01:27:41.584129] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:15.679 [2024-07-16 01:27:41.584135] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:15.679 [2024-07-16 01:27:41.584138] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:15.679 [2024-07-16 01:27:41.584141] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2155ec0): datao=0, datal=8192, cccid=5 00:22:15.679 [2024-07-16 01:27:41.584144] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21d9740) on tqpair(0x2155ec0): expected_datao=0, payload_size=8192 00:22:15.679 [2024-07-16 01:27:41.584148] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.679 [2024-07-16 01:27:41.584174] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:15.679 [2024-07-16 01:27:41.584177] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:15.679 [2024-07-16 01:27:41.584182] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:15.679 [2024-07-16 01:27:41.584187] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:15.679 [2024-07-16 01:27:41.584190] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:15.679 [2024-07-16 01:27:41.584192] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2155ec0): datao=0, datal=512, cccid=4 00:22:15.679 [2024-07-16 01:27:41.584196] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21d95c0) on tqpair(0x2155ec0): expected_datao=0, payload_size=512 00:22:15.679 [2024-07-16 01:27:41.584199] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.679 [2024-07-16 01:27:41.584205] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:15.679 [2024-07-16 01:27:41.584208] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:15.679 [2024-07-16 01:27:41.584212] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:15.679 [2024-07-16 01:27:41.584217] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:15.679 [2024-07-16 01:27:41.584220] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:15.679 [2024-07-16 01:27:41.584222] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2155ec0): datao=0, datal=512, cccid=6 00:22:15.679 [2024-07-16 01:27:41.584226] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21d98c0) on tqpair(0x2155ec0): expected_datao=0, payload_size=512 00:22:15.679 [2024-07-16 01:27:41.584230] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.679 [2024-07-16 01:27:41.584235] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:15.679 [2024-07-16 01:27:41.584238] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:15.679 [2024-07-16 01:27:41.584242] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:15.679 [2024-07-16 01:27:41.584247] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:15.679 [2024-07-16 01:27:41.584250] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:15.679 [2024-07-16 01:27:41.584252] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2155ec0): datao=0, datal=4096, cccid=7 00:22:15.679 [2024-07-16 01:27:41.584256] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21d9a40) on tqpair(0x2155ec0): expected_datao=0, payload_size=4096 00:22:15.679 [2024-07-16 01:27:41.584260] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.679 [2024-07-16 01:27:41.584265] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:15.679 [2024-07-16 01:27:41.584268] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:15.679 [2024-07-16 01:27:41.584275] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.679 [2024-07-16 01:27:41.584280] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.679 [2024-07-16 01:27:41.584283] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.679 [2024-07-16 01:27:41.584286] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d9740) on tqpair=0x2155ec0 00:22:15.679 [2024-07-16 01:27:41.584296] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.679 [2024-07-16 01:27:41.584301] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.679 [2024-07-16 01:27:41.584304] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.679 [2024-07-16 01:27:41.584307] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d95c0) on tqpair=0x2155ec0 00:22:15.679 [2024-07-16 01:27:41.584315] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.679 [2024-07-16 01:27:41.584320] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.679 [2024-07-16 01:27:41.584323] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.679 [2024-07-16 01:27:41.584326] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d98c0) on tqpair=0x2155ec0 00:22:15.679 [2024-07-16 01:27:41.584332] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.679 [2024-07-16 01:27:41.584341] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.679 [2024-07-16 01:27:41.584344] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.679 [2024-07-16 01:27:41.584347] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d9a40) on tqpair=0x2155ec0 00:22:15.679 ===================================================== 00:22:15.679 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:15.679 ===================================================== 00:22:15.679 Controller Capabilities/Features 00:22:15.679 ================================ 00:22:15.679 Vendor ID: 8086 00:22:15.679 Subsystem Vendor ID: 8086 00:22:15.679 Serial Number: SPDK00000000000001 00:22:15.679 Model Number: SPDK bdev Controller 00:22:15.679 Firmware Version: 24.09 00:22:15.679 Recommended Arb Burst: 6 00:22:15.679 IEEE OUI Identifier: e4 d2 5c 00:22:15.679 Multi-path I/O 00:22:15.679 May have multiple subsystem ports: Yes 00:22:15.679 May have multiple controllers: Yes 00:22:15.679 Associated with SR-IOV VF: No 00:22:15.679 Max Data Transfer Size: 131072 00:22:15.679 Max Number of Namespaces: 32 00:22:15.679 Max Number of I/O Queues: 127 00:22:15.679 NVMe Specification Version (VS): 1.3 00:22:15.679 NVMe Specification Version (Identify): 1.3 00:22:15.679 Maximum Queue Entries: 128 00:22:15.679 Contiguous Queues Required: Yes 00:22:15.679 
Arbitration Mechanisms Supported 00:22:15.679 Weighted Round Robin: Not Supported 00:22:15.679 Vendor Specific: Not Supported 00:22:15.679 Reset Timeout: 15000 ms 00:22:15.679 Doorbell Stride: 4 bytes 00:22:15.679 NVM Subsystem Reset: Not Supported 00:22:15.679 Command Sets Supported 00:22:15.679 NVM Command Set: Supported 00:22:15.679 Boot Partition: Not Supported 00:22:15.679 Memory Page Size Minimum: 4096 bytes 00:22:15.679 Memory Page Size Maximum: 4096 bytes 00:22:15.679 Persistent Memory Region: Not Supported 00:22:15.679 Optional Asynchronous Events Supported 00:22:15.679 Namespace Attribute Notices: Supported 00:22:15.679 Firmware Activation Notices: Not Supported 00:22:15.679 ANA Change Notices: Not Supported 00:22:15.679 PLE Aggregate Log Change Notices: Not Supported 00:22:15.679 LBA Status Info Alert Notices: Not Supported 00:22:15.679 EGE Aggregate Log Change Notices: Not Supported 00:22:15.679 Normal NVM Subsystem Shutdown event: Not Supported 00:22:15.679 Zone Descriptor Change Notices: Not Supported 00:22:15.679 Discovery Log Change Notices: Not Supported 00:22:15.679 Controller Attributes 00:22:15.679 128-bit Host Identifier: Supported 00:22:15.679 Non-Operational Permissive Mode: Not Supported 00:22:15.679 NVM Sets: Not Supported 00:22:15.679 Read Recovery Levels: Not Supported 00:22:15.679 Endurance Groups: Not Supported 00:22:15.679 Predictable Latency Mode: Not Supported 00:22:15.679 Traffic Based Keep ALive: Not Supported 00:22:15.679 Namespace Granularity: Not Supported 00:22:15.679 SQ Associations: Not Supported 00:22:15.679 UUID List: Not Supported 00:22:15.679 Multi-Domain Subsystem: Not Supported 00:22:15.679 Fixed Capacity Management: Not Supported 00:22:15.679 Variable Capacity Management: Not Supported 00:22:15.679 Delete Endurance Group: Not Supported 00:22:15.679 Delete NVM Set: Not Supported 00:22:15.679 Extended LBA Formats Supported: Not Supported 00:22:15.679 Flexible Data Placement Supported: Not Supported 00:22:15.679 00:22:15.679 Controller Memory Buffer Support 00:22:15.679 ================================ 00:22:15.679 Supported: No 00:22:15.679 00:22:15.679 Persistent Memory Region Support 00:22:15.679 ================================ 00:22:15.679 Supported: No 00:22:15.679 00:22:15.679 Admin Command Set Attributes 00:22:15.679 ============================ 00:22:15.679 Security Send/Receive: Not Supported 00:22:15.679 Format NVM: Not Supported 00:22:15.679 Firmware Activate/Download: Not Supported 00:22:15.679 Namespace Management: Not Supported 00:22:15.679 Device Self-Test: Not Supported 00:22:15.679 Directives: Not Supported 00:22:15.679 NVMe-MI: Not Supported 00:22:15.679 Virtualization Management: Not Supported 00:22:15.679 Doorbell Buffer Config: Not Supported 00:22:15.679 Get LBA Status Capability: Not Supported 00:22:15.679 Command & Feature Lockdown Capability: Not Supported 00:22:15.679 Abort Command Limit: 4 00:22:15.679 Async Event Request Limit: 4 00:22:15.679 Number of Firmware Slots: N/A 00:22:15.679 Firmware Slot 1 Read-Only: N/A 00:22:15.679 Firmware Activation Without Reset: N/A 00:22:15.679 Multiple Update Detection Support: N/A 00:22:15.679 Firmware Update Granularity: No Information Provided 00:22:15.679 Per-Namespace SMART Log: No 00:22:15.679 Asymmetric Namespace Access Log Page: Not Supported 00:22:15.679 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:15.679 Command Effects Log Page: Supported 00:22:15.679 Get Log Page Extended Data: Supported 00:22:15.679 Telemetry Log Pages: Not Supported 00:22:15.679 Persistent Event Log 
Pages: Not Supported 00:22:15.679 Supported Log Pages Log Page: May Support 00:22:15.679 Commands Supported & Effects Log Page: Not Supported 00:22:15.679 Feature Identifiers & Effects Log Page:May Support 00:22:15.679 NVMe-MI Commands & Effects Log Page: May Support 00:22:15.679 Data Area 4 for Telemetry Log: Not Supported 00:22:15.679 Error Log Page Entries Supported: 128 00:22:15.679 Keep Alive: Supported 00:22:15.679 Keep Alive Granularity: 10000 ms 00:22:15.679 00:22:15.679 NVM Command Set Attributes 00:22:15.679 ========================== 00:22:15.680 Submission Queue Entry Size 00:22:15.680 Max: 64 00:22:15.680 Min: 64 00:22:15.680 Completion Queue Entry Size 00:22:15.680 Max: 16 00:22:15.680 Min: 16 00:22:15.680 Number of Namespaces: 32 00:22:15.680 Compare Command: Supported 00:22:15.680 Write Uncorrectable Command: Not Supported 00:22:15.680 Dataset Management Command: Supported 00:22:15.680 Write Zeroes Command: Supported 00:22:15.680 Set Features Save Field: Not Supported 00:22:15.680 Reservations: Supported 00:22:15.680 Timestamp: Not Supported 00:22:15.680 Copy: Supported 00:22:15.680 Volatile Write Cache: Present 00:22:15.680 Atomic Write Unit (Normal): 1 00:22:15.680 Atomic Write Unit (PFail): 1 00:22:15.680 Atomic Compare & Write Unit: 1 00:22:15.680 Fused Compare & Write: Supported 00:22:15.680 Scatter-Gather List 00:22:15.680 SGL Command Set: Supported 00:22:15.680 SGL Keyed: Supported 00:22:15.680 SGL Bit Bucket Descriptor: Not Supported 00:22:15.680 SGL Metadata Pointer: Not Supported 00:22:15.680 Oversized SGL: Not Supported 00:22:15.680 SGL Metadata Address: Not Supported 00:22:15.680 SGL Offset: Supported 00:22:15.680 Transport SGL Data Block: Not Supported 00:22:15.680 Replay Protected Memory Block: Not Supported 00:22:15.680 00:22:15.680 Firmware Slot Information 00:22:15.680 ========================= 00:22:15.680 Active slot: 1 00:22:15.680 Slot 1 Firmware Revision: 24.09 00:22:15.680 00:22:15.680 00:22:15.680 Commands Supported and Effects 00:22:15.680 ============================== 00:22:15.680 Admin Commands 00:22:15.680 -------------- 00:22:15.680 Get Log Page (02h): Supported 00:22:15.680 Identify (06h): Supported 00:22:15.680 Abort (08h): Supported 00:22:15.680 Set Features (09h): Supported 00:22:15.680 Get Features (0Ah): Supported 00:22:15.680 Asynchronous Event Request (0Ch): Supported 00:22:15.680 Keep Alive (18h): Supported 00:22:15.680 I/O Commands 00:22:15.680 ------------ 00:22:15.680 Flush (00h): Supported LBA-Change 00:22:15.680 Write (01h): Supported LBA-Change 00:22:15.680 Read (02h): Supported 00:22:15.680 Compare (05h): Supported 00:22:15.680 Write Zeroes (08h): Supported LBA-Change 00:22:15.680 Dataset Management (09h): Supported LBA-Change 00:22:15.680 Copy (19h): Supported LBA-Change 00:22:15.680 00:22:15.680 Error Log 00:22:15.680 ========= 00:22:15.680 00:22:15.680 Arbitration 00:22:15.680 =========== 00:22:15.680 Arbitration Burst: 1 00:22:15.680 00:22:15.680 Power Management 00:22:15.680 ================ 00:22:15.680 Number of Power States: 1 00:22:15.680 Current Power State: Power State #0 00:22:15.680 Power State #0: 00:22:15.680 Max Power: 0.00 W 00:22:15.680 Non-Operational State: Operational 00:22:15.680 Entry Latency: Not Reported 00:22:15.680 Exit Latency: Not Reported 00:22:15.680 Relative Read Throughput: 0 00:22:15.680 Relative Read Latency: 0 00:22:15.680 Relative Write Throughput: 0 00:22:15.680 Relative Write Latency: 0 00:22:15.680 Idle Power: Not Reported 00:22:15.680 Active Power: Not Reported 00:22:15.680 
Non-Operational Permissive Mode: Not Supported 00:22:15.680 00:22:15.680 Health Information 00:22:15.680 ================== 00:22:15.680 Critical Warnings: 00:22:15.680 Available Spare Space: OK 00:22:15.680 Temperature: OK 00:22:15.680 Device Reliability: OK 00:22:15.680 Read Only: No 00:22:15.680 Volatile Memory Backup: OK 00:22:15.680 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:15.680 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:15.680 Available Spare: 0% 00:22:15.680 Available Spare Threshold: 0% 00:22:15.680 Life Percentage Used:[2024-07-16 01:27:41.584427] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.680 [2024-07-16 01:27:41.584431] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2155ec0) 00:22:15.680 [2024-07-16 01:27:41.584437] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.680 [2024-07-16 01:27:41.584448] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d9a40, cid 7, qid 0 00:22:15.680 [2024-07-16 01:27:41.584603] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.680 [2024-07-16 01:27:41.584609] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.680 [2024-07-16 01:27:41.584612] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.680 [2024-07-16 01:27:41.584615] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d9a40) on tqpair=0x2155ec0 00:22:15.680 [2024-07-16 01:27:41.584640] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:22:15.680 [2024-07-16 01:27:41.584648] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d8fc0) on tqpair=0x2155ec0 00:22:15.680 [2024-07-16 01:27:41.584653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.680 [2024-07-16 01:27:41.584658] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d9140) on tqpair=0x2155ec0 00:22:15.680 [2024-07-16 01:27:41.584662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.680 [2024-07-16 01:27:41.584666] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d92c0) on tqpair=0x2155ec0 00:22:15.680 [2024-07-16 01:27:41.584669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.680 [2024-07-16 01:27:41.584674] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d9440) on tqpair=0x2155ec0 00:22:15.680 [2024-07-16 01:27:41.584677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.680 [2024-07-16 01:27:41.584684] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.680 [2024-07-16 01:27:41.584687] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.680 [2024-07-16 01:27:41.584690] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2155ec0) 00:22:15.680 [2024-07-16 01:27:41.584696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.680 [2024-07-16 01:27:41.584706] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d9440, cid 3, qid 0 00:22:15.680 [2024-07-16 01:27:41.584775] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.680 [2024-07-16 01:27:41.584781] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.680 [2024-07-16 01:27:41.584784] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.680 [2024-07-16 01:27:41.584787] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d9440) on tqpair=0x2155ec0 00:22:15.680 [2024-07-16 01:27:41.584792] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.680 [2024-07-16 01:27:41.584796] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.680 [2024-07-16 01:27:41.584798] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2155ec0) 00:22:15.680 [2024-07-16 01:27:41.584804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.680 [2024-07-16 01:27:41.584815] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d9440, cid 3, qid 0 00:22:15.680 [2024-07-16 01:27:41.584895] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.680 [2024-07-16 01:27:41.584900] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.680 [2024-07-16 01:27:41.584903] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.680 [2024-07-16 01:27:41.584906] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d9440) on tqpair=0x2155ec0 00:22:15.680 [2024-07-16 01:27:41.584910] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:22:15.680 [2024-07-16 01:27:41.584913] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:22:15.680 [2024-07-16 01:27:41.584921] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.680 [2024-07-16 01:27:41.584925] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.680 [2024-07-16 01:27:41.584927] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2155ec0) 00:22:15.680 [2024-07-16 01:27:41.584933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.680 [2024-07-16 01:27:41.584942] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d9440, cid 3, qid 0 00:22:15.680 [2024-07-16 01:27:41.585013] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.680 [2024-07-16 01:27:41.585019] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.681 [2024-07-16 01:27:41.585022] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.681 [2024-07-16 01:27:41.585025] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d9440) on tqpair=0x2155ec0 00:22:15.681 [2024-07-16 01:27:41.585033] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.681 [2024-07-16 01:27:41.585036] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.681 [2024-07-16 01:27:41.585039] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2155ec0) 00:22:15.681 [2024-07-16 01:27:41.585045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.681 [2024-07-16 01:27:41.585054] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d9440, cid 3, qid 0 00:22:15.681 [2024-07-16 01:27:41.585130] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.681 [2024-07-16 01:27:41.585136] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.681 [2024-07-16 01:27:41.585138] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.681 [2024-07-16 01:27:41.585142] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d9440) on tqpair=0x2155ec0 00:22:15.681 [2024-07-16 01:27:41.585149] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.681 [2024-07-16 01:27:41.585152] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.681 [2024-07-16 01:27:41.585155] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2155ec0) 00:22:15.681 [2024-07-16 01:27:41.585161] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.681 [2024-07-16 01:27:41.585171] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d9440, cid 3, qid 0 00:22:15.681 [2024-07-16 01:27:41.585232] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.681 [2024-07-16 01:27:41.585237] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.681 [2024-07-16 01:27:41.585240] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.681 [2024-07-16 01:27:41.585243] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d9440) on tqpair=0x2155ec0 00:22:15.681 [2024-07-16 01:27:41.585251] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.681 [2024-07-16 01:27:41.585254] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.681 [2024-07-16 01:27:41.585257] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2155ec0) 00:22:15.681 [2024-07-16 01:27:41.585262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.681 [2024-07-16 01:27:41.585271] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d9440, cid 3, qid 0 00:22:15.681 [2024-07-16 01:27:41.585340] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.681 [2024-07-16 01:27:41.585346] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.681 [2024-07-16 01:27:41.585349] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.681 [2024-07-16 01:27:41.585352] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d9440) on tqpair=0x2155ec0 00:22:15.681 [2024-07-16 01:27:41.585361] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.681 [2024-07-16 01:27:41.585364] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.681 [2024-07-16 01:27:41.585367] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2155ec0) 00:22:15.681 [2024-07-16 01:27:41.585373] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.681 [2024-07-16 01:27:41.585382] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d9440, cid 3, qid 0 00:22:15.681 [2024-07-16 
01:27:41.585471] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.681 [2024-07-16 01:27:41.585477] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.681 [2024-07-16 01:27:41.585480] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.681 [2024-07-16 01:27:41.585483] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d9440) on tqpair=0x2155ec0 00:22:15.681 [2024-07-16 01:27:41.585491] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.681 [2024-07-16 01:27:41.585494] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.681 [2024-07-16 01:27:41.585497] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2155ec0) 00:22:15.681 [2024-07-16 01:27:41.585502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.681 [2024-07-16 01:27:41.585511] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d9440, cid 3, qid 0 00:22:15.681 [2024-07-16 01:27:41.585583] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.681 [2024-07-16 01:27:41.585588] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.681 [2024-07-16 01:27:41.585591] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.681 [2024-07-16 01:27:41.585594] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d9440) on tqpair=0x2155ec0 00:22:15.681 [2024-07-16 01:27:41.585601] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.681 [2024-07-16 01:27:41.585605] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.681 [2024-07-16 01:27:41.585608] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2155ec0) 00:22:15.681 [2024-07-16 01:27:41.585613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.681 [2024-07-16 01:27:41.585622] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d9440, cid 3, qid 0 00:22:15.681 [2024-07-16 01:27:41.585699] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.681 [2024-07-16 01:27:41.585705] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.681 [2024-07-16 01:27:41.585707] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.681 [2024-07-16 01:27:41.585711] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d9440) on tqpair=0x2155ec0 00:22:15.681 [2024-07-16 01:27:41.585718] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.681 [2024-07-16 01:27:41.585722] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.681 [2024-07-16 01:27:41.585725] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2155ec0) 00:22:15.681 [2024-07-16 01:27:41.585730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.681 [2024-07-16 01:27:41.585738] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d9440, cid 3, qid 0 00:22:15.681 [2024-07-16 01:27:41.585799] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.681 [2024-07-16 01:27:41.585804] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.681 
[2024-07-16 01:27:41.585807] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.681 [2024-07-16 01:27:41.585810] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d9440) on tqpair=0x2155ec0 00:22:15.681 [2024-07-16 01:27:41.585818] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:15.681 [2024-07-16 01:27:41.585821] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:15.681 [2024-07-16 01:27:41.585824] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2155ec0) 00:22:15.681 [2024-07-16 01:27:41.585829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.681 [2024-07-16 01:27:41.585838] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d9440, cid 3, qid 0 00:22:15.681 
[2024-07-16 01:27:41.591577] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:15.682 [2024-07-16 01:27:41.591583] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:15.682 [2024-07-16 01:27:41.591586] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:15.682 [2024-07-16 01:27:41.591589] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d9440) on tqpair=0x2155ec0 00:22:15.682 [2024-07-16 01:27:41.591597] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:22:15.682 0% 00:22:15.682 Data Units Read: 0 00:22:15.682 Data Units Written: 0 00:22:15.682 Host Read Commands: 0 00:22:15.682 Host Write Commands: 0 00:22:15.682 Controller Busy Time: 0 minutes 00:22:15.682 Power Cycles: 0 00:22:15.682 Power On Hours: 0 hours 00:22:15.682 Unsafe Shutdowns: 0 00:22:15.682 Unrecoverable Media Errors: 0 00:22:15.682 Lifetime Error Log Entries: 0 00:22:15.682 Warning Temperature Time: 0 minutes 00:22:15.682 Critical Temperature Time: 0 minutes 00:22:15.682 00:22:15.682 Number of Queues 00:22:15.682 ================ 00:22:15.682 Number of I/O Submission Queues: 127 
00:22:15.682 Number of I/O Completion Queues: 127 00:22:15.682 00:22:15.682 Active Namespaces 00:22:15.682 ================= 00:22:15.682 Namespace ID:1 00:22:15.682 Error Recovery Timeout: Unlimited 00:22:15.682 Command Set Identifier: NVM (00h) 00:22:15.682 Deallocate: Supported 00:22:15.682 Deallocated/Unwritten Error: Not Supported 00:22:15.682 Deallocated Read Value: Unknown 00:22:15.682 Deallocate in Write Zeroes: Not Supported 00:22:15.682 Deallocated Guard Field: 0xFFFF 00:22:15.682 Flush: Supported 00:22:15.682 Reservation: Supported 00:22:15.682 Namespace Sharing Capabilities: Multiple Controllers 00:22:15.682 Size (in LBAs): 131072 (0GiB) 00:22:15.682 Capacity (in LBAs): 131072 (0GiB) 00:22:15.682 Utilization (in LBAs): 131072 (0GiB) 00:22:15.682 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:15.682 EUI64: ABCDEF0123456789 00:22:15.682 UUID: 6d6077a4-e3bd-4dc7-96f1-ec60e3e39c7d 00:22:15.682 Thin Provisioning: Not Supported 00:22:15.682 Per-NS Atomic Units: Yes 00:22:15.682 Atomic Boundary Size (Normal): 0 00:22:15.682 Atomic Boundary Size (PFail): 0 00:22:15.682 Atomic Boundary Offset: 0 00:22:15.682 Maximum Single Source Range Length: 65535 00:22:15.682 Maximum Copy Length: 65535 00:22:15.682 Maximum Source Range Count: 1 00:22:15.682 NGUID/EUI64 Never Reused: No 00:22:15.682 Namespace Write Protected: No 00:22:15.682 Number of LBA Formats: 1 00:22:15.682 Current LBA Format: LBA Format #00 00:22:15.682 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:15.682 00:22:15.682 01:27:41 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:15.683 01:27:41 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:15.683 01:27:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.683 01:27:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:15.683 01:27:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.683 01:27:41 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:15.683 01:27:41 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:15.683 01:27:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:15.683 01:27:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:22:15.683 01:27:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:15.683 01:27:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:22:15.683 01:27:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:15.683 01:27:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:15.683 rmmod nvme_tcp 00:22:15.683 rmmod nvme_fabrics 00:22:15.683 rmmod nvme_keyring 00:22:15.940 01:27:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:15.940 01:27:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:22:15.940 01:27:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:22:15.940 01:27:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 3465986 ']' 00:22:15.941 01:27:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 3465986 00:22:15.941 01:27:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 3465986 ']' 00:22:15.941 01:27:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 3465986 00:22:15.941 01:27:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:22:15.941 01:27:41 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:15.941 01:27:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3465986 00:22:15.941 01:27:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:15.941 01:27:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:15.941 01:27:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3465986' 00:22:15.941 killing process with pid 3465986 00:22:15.941 01:27:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 3465986 00:22:15.941 01:27:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 3465986 00:22:15.941 01:27:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:15.941 01:27:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:15.941 01:27:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:15.941 01:27:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:15.941 01:27:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:15.941 01:27:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.941 01:27:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:15.941 01:27:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:18.467 01:27:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:18.467 00:22:18.467 real 0m9.493s 00:22:18.467 user 0m7.775s 00:22:18.467 sys 0m4.590s 00:22:18.467 01:27:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:18.467 01:27:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:18.467 ************************************ 00:22:18.467 END TEST nvmf_identify 00:22:18.467 ************************************ 00:22:18.467 01:27:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:18.467 01:27:44 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:18.467 01:27:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:18.467 01:27:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:18.467 01:27:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:18.467 ************************************ 00:22:18.467 START TEST nvmf_perf 00:22:18.467 ************************************ 00:22:18.467 01:27:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:18.467 * Looking for test storage... 
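The nvmf_identify teardown captured above amounts to one RPC call plus kernel-module and process cleanup. A minimal shell sketch of that sequence, assuming rpc.py talks to the target on the default /var/tmp/spdk.sock and using the target pid (3465986) recorded in this run:

    # delete the test subsystem from the running nvmf target
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    # unload the kernel initiator stack; nvme_fabrics and nvme_keyring drop out with it
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # stop the nvmf_tgt process itself
    kill 3465986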
00:22:18.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:18.467 01:27:44 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:18.467 01:27:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:18.467 01:27:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:18.467 01:27:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.468 01:27:44 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:22:18.468 01:27:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:23.726 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:23.726 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:23.726 Found net devices under 0000:86:00.0: cvl_0_0 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:23.726 Found net devices under 0000:86:00.1: cvl_0_1 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:23.726 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:23.726 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:22:23.726 00:22:23.726 --- 10.0.0.2 ping statistics --- 00:22:23.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.726 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:22:23.726 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:23.726 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:23.726 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:22:23.726 00:22:23.726 --- 10.0.0.1 ping statistics --- 00:22:23.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.727 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:22:23.727 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:23.727 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:22:23.727 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:23.727 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:23.727 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:23.727 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:23.727 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:23.727 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:23.727 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:23.727 01:27:49 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:23.727 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:23.727 01:27:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:23.727 01:27:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:23.727 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=3469724 00:22:23.727 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:23.727 01:27:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 3469724 00:22:23.727 01:27:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 3469724 ']' 00:22:23.727 01:27:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.727 01:27:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:23.727 01:27:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:23.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:23.727 01:27:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:23.727 01:27:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:23.727 [2024-07-16 01:27:49.640721] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:22:23.727 [2024-07-16 01:27:49.640765] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:23.727 EAL: No free 2048 kB hugepages reported on node 1 00:22:23.727 [2024-07-16 01:27:49.698824] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:23.982 [2024-07-16 01:27:49.779605] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:23.982 [2024-07-16 01:27:49.779639] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:23.982 [2024-07-16 01:27:49.779648] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:23.982 [2024-07-16 01:27:49.779653] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:23.982 [2024-07-16 01:27:49.779661] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:23.982 [2024-07-16 01:27:49.779704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:23.982 [2024-07-16 01:27:49.779798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:23.982 [2024-07-16 01:27:49.779887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:23.982 [2024-07-16 01:27:49.779888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:24.543 01:27:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:24.543 01:27:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:22:24.543 01:27:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:24.543 01:27:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:24.543 01:27:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:24.543 01:27:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:24.543 01:27:50 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:24.543 01:27:50 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:27.810 01:27:53 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:27.810 01:27:53 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:27.810 01:27:53 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:22:27.810 01:27:53 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:28.066 01:27:53 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:28.066 01:27:53 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:22:28.066 01:27:53 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:28.066 01:27:53 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:28.066 01:27:53 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:28.066 [2024-07-16 01:27:54.042752] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:28.322 01:27:54 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:28.322 01:27:54 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:28.322 01:27:54 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:28.577 01:27:54 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:28.577 01:27:54 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:28.834 01:27:54 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:28.834 [2024-07-16 01:27:54.758244] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:28.834 01:27:54 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:29.091 01:27:54 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:22:29.091 01:27:54 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:29.091 01:27:54 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:29.091 01:27:54 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:30.548 Initializing NVMe Controllers 00:22:30.548 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:22:30.548 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:22:30.548 Initialization complete. Launching workers. 00:22:30.548 ======================================================== 00:22:30.548 Latency(us) 00:22:30.548 Device Information : IOPS MiB/s Average min max 00:22:30.548 PCIE (0000:5e:00.0) NSID 1 from core 0: 99956.90 390.46 319.75 33.22 5203.65 00:22:30.548 ======================================================== 00:22:30.548 Total : 99956.90 390.46 319.75 33.22 5203.65 00:22:30.548 00:22:30.548 01:27:56 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:30.548 EAL: No free 2048 kB hugepages reported on node 1 00:22:31.478 Initializing NVMe Controllers 00:22:31.478 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:31.478 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:31.478 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:31.478 Initialization complete. Launching workers. 
00:22:31.478 ======================================================== 00:22:31.478 Latency(us) 00:22:31.478 Device Information : IOPS MiB/s Average min max 00:22:31.478 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 182.00 0.71 5701.93 107.59 45889.75 00:22:31.478 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 71.00 0.28 14164.63 7955.35 47900.58 00:22:31.478 ======================================================== 00:22:31.478 Total : 253.00 0.99 8076.84 107.59 47900.58 00:22:31.478 00:22:31.478 01:27:57 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:31.478 EAL: No free 2048 kB hugepages reported on node 1 00:22:32.844 Initializing NVMe Controllers 00:22:32.844 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:32.844 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:32.844 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:32.844 Initialization complete. Launching workers. 00:22:32.844 ======================================================== 00:22:32.844 Latency(us) 00:22:32.844 Device Information : IOPS MiB/s Average min max 00:22:32.844 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11161.54 43.60 2860.23 426.84 16334.39 00:22:32.844 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3859.47 15.08 8217.91 5195.20 15924.82 00:22:32.844 ======================================================== 00:22:32.844 Total : 15021.01 58.68 4236.82 426.84 16334.39 00:22:32.844 00:22:32.844 01:27:58 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:32.844 01:27:58 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:32.844 01:27:58 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:32.844 EAL: No free 2048 kB hugepages reported on node 1 00:22:35.370 Initializing NVMe Controllers 00:22:35.370 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:35.370 Controller IO queue size 128, less than required. 00:22:35.370 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:35.370 Controller IO queue size 128, less than required. 00:22:35.370 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:35.370 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:35.370 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:35.370 Initialization complete. Launching workers. 
00:22:35.370 ======================================================== 00:22:35.370 Latency(us) 00:22:35.370 Device Information : IOPS MiB/s Average min max 00:22:35.370 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1870.17 467.54 69098.27 40031.75 135647.24 00:22:35.370 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 599.73 149.93 218524.23 119330.29 316937.71 00:22:35.370 ======================================================== 00:22:35.370 Total : 2469.90 617.48 105381.38 40031.75 316937.71 00:22:35.370 00:22:35.370 01:28:01 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:35.370 EAL: No free 2048 kB hugepages reported on node 1 00:22:35.370 No valid NVMe controllers or AIO or URING devices found 00:22:35.370 Initializing NVMe Controllers 00:22:35.371 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:35.371 Controller IO queue size 128, less than required. 00:22:35.371 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:35.371 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:35.371 Controller IO queue size 128, less than required. 00:22:35.371 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:35.371 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:22:35.371 WARNING: Some requested NVMe devices were skipped 00:22:35.371 01:28:01 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:35.371 EAL: No free 2048 kB hugepages reported on node 1 00:22:37.895 Initializing NVMe Controllers 00:22:37.895 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:37.895 Controller IO queue size 128, less than required. 00:22:37.895 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:37.895 Controller IO queue size 128, less than required. 00:22:37.895 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:37.895 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:37.895 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:37.895 Initialization complete. Launching workers. 
00:22:37.895 00:22:37.895 ==================== 00:22:37.895 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:37.895 TCP transport: 00:22:37.895 polls: 14071 00:22:37.895 idle_polls: 9837 00:22:37.895 sock_completions: 4234 00:22:37.895 nvme_completions: 6783 00:22:37.895 submitted_requests: 10218 00:22:37.895 queued_requests: 1 00:22:37.895 00:22:37.895 ==================== 00:22:37.895 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:37.895 TCP transport: 00:22:37.895 polls: 17413 00:22:37.895 idle_polls: 12602 00:22:37.895 sock_completions: 4811 00:22:37.895 nvme_completions: 7001 00:22:37.895 submitted_requests: 10538 00:22:37.895 queued_requests: 1 00:22:37.895 ======================================================== 00:22:37.895 Latency(us) 00:22:37.895 Device Information : IOPS MiB/s Average min max 00:22:37.895 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1695.21 423.80 76897.54 51255.80 131847.26 00:22:37.895 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1749.70 437.42 72069.90 44749.36 103456.33 00:22:37.895 ======================================================== 00:22:37.895 Total : 3444.91 861.23 74445.54 44749.36 131847.26 00:22:37.895 00:22:37.895 01:28:03 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:37.895 01:28:03 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:38.154 01:28:03 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:38.154 01:28:03 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:38.154 01:28:03 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:38.154 01:28:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:38.154 01:28:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:22:38.154 01:28:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:38.154 01:28:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:22:38.154 01:28:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:38.154 01:28:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:38.154 rmmod nvme_tcp 00:22:38.154 rmmod nvme_fabrics 00:22:38.154 rmmod nvme_keyring 00:22:38.154 01:28:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:38.154 01:28:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:22:38.154 01:28:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:22:38.154 01:28:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 3469724 ']' 00:22:38.154 01:28:03 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 3469724 00:22:38.154 01:28:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 3469724 ']' 00:22:38.154 01:28:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 3469724 00:22:38.154 01:28:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:22:38.154 01:28:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:38.154 01:28:03 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3469724 00:22:38.154 01:28:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:38.154 01:28:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:38.154 01:28:04 nvmf_tcp.nvmf_perf 
-- common/autotest_common.sh@966 -- # echo 'killing process with pid 3469724' 00:22:38.154 killing process with pid 3469724 00:22:38.154 01:28:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 3469724 00:22:38.154 01:28:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 3469724 00:22:40.051 01:28:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:40.051 01:28:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:40.051 01:28:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:40.051 01:28:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:40.051 01:28:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:40.051 01:28:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.051 01:28:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:40.051 01:28:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.580 01:28:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:42.580 00:22:42.580 real 0m24.036s 00:22:42.580 user 1m4.716s 00:22:42.580 sys 0m7.452s 00:22:42.580 01:28:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:42.580 01:28:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:42.580 ************************************ 00:22:42.580 END TEST nvmf_perf 00:22:42.580 ************************************ 00:22:42.580 01:28:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:42.580 01:28:08 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:42.580 01:28:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:42.580 01:28:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:42.580 01:28:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:42.580 ************************************ 00:22:42.580 START TEST nvmf_fio_host 00:22:42.580 ************************************ 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:42.580 * Looking for test storage... 
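Before fio.sh layers its own configuration on top, the perf run above is worth condensing: perf.sh stood the TCP target up with a handful of RPCs and then pointed spdk_nvme_perf at the listener. A sketch of that bring-up under the addresses used in this run, assuming the same netns-wrapped target and paths relative to the spdk checkout:

    # transport, subsystem, namespaces, listener -- as recorded in the perf.sh xtrace
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # one of the recorded workloads: QD=1, 4 KiB, 50/50 random read/write, 1 second
    build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'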
00:22:42.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:22:42.580 01:28:08 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:47.843 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:47.843 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:47.843 Found net devices under 0000:86:00.0: cvl_0_0 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:47.843 Found net devices under 0000:86:00.1: cvl_0_1 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
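With both E810 ports discovered (cvl_0_0 and cvl_0_1, located via the /sys/bus/pci/devices/$pci/net/ globs above), is_hw=yes sends the script into nvmf_tcp_init, traced next. Condensed, the topology it builds moves the target port into its own network namespace so initiator and target traffic must cross the physical link. The commands are taken from the trace (address flushes omitted); only the comments are added:

    ip netns add cvl_0_0_ns_spdk                      # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move port 0 into it
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator IP, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator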
00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:47.843 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:47.844 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:47.844 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:47.844 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:47.844 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:47.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:47.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:22:47.844 00:22:47.844 --- 10.0.0.2 ping statistics --- 00:22:47.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.844 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:22:47.844 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:47.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:47.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:22:47.844 00:22:47.844 --- 10.0.0.1 ping statistics --- 00:22:47.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.844 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:22:47.844 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:47.844 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:22:47.844 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:47.844 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:47.844 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:47.844 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:47.844 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:47.844 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:47.844 01:28:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:47.844 01:28:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:47.844 01:28:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:47.844 01:28:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:47.844 01:28:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.844 01:28:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3475735 00:22:47.844 01:28:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:47.844 01:28:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3475735 00:22:47.844 01:28:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 3475735 ']' 00:22:47.844 01:28:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:47.844 01:28:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:47.844 01:28:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:47.844 01:28:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:47.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:47.844 01:28:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:47.844 01:28:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.844 [2024-07-16 01:28:13.496084] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:22:47.844 [2024-07-16 01:28:13.496132] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:47.844 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.844 [2024-07-16 01:28:13.553935] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:47.844 [2024-07-16 01:28:13.634484] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:47.844 [2024-07-16 01:28:13.634520] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:47.844 [2024-07-16 01:28:13.634528] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:47.844 [2024-07-16 01:28:13.634534] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:47.844 [2024-07-16 01:28:13.634540] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:47.844 [2024-07-16 01:28:13.634590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:47.844 [2024-07-16 01:28:13.634605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:47.844 [2024-07-16 01:28:13.634623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:47.844 [2024-07-16 01:28:13.634624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.408 01:28:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:48.408 01:28:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:22:48.408 01:28:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:48.670 [2024-07-16 01:28:14.432432] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:48.670 01:28:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:48.670 01:28:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:48.670 01:28:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:48.670 01:28:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:48.670 Malloc1 00:22:48.927 01:28:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:48.927 01:28:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:49.185 01:28:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:49.442 [2024-07-16 01:28:15.207128] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:49.442 01:28:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:49.442 01:28:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:49.442 01:28:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:49.442 01:28:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:22:49.442 01:28:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:49.443 01:28:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:49.443 01:28:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:49.443 01:28:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:49.443 01:28:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:22:49.443 01:28:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:49.443 01:28:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:49.443 01:28:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:49.443 01:28:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:22:49.443 01:28:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:49.706 01:28:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:49.706 01:28:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:49.706 01:28:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:49.706 01:28:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:49.706 01:28:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:49.706 01:28:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:49.706 01:28:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:49.706 01:28:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:49.706 01:28:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:49.706 01:28:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:49.963 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:49.963 fio-3.35 00:22:49.963 Starting 1 thread 00:22:49.963 EAL: No free 2048 kB hugepages reported on node 1 00:22:52.484 00:22:52.484 test: (groupid=0, jobs=1): err= 0: pid=3476226: Tue Jul 16 01:28:18 2024 00:22:52.484 read: IOPS=11.9k, BW=46.6MiB/s (48.8MB/s)(93.4MiB/2005msec) 00:22:52.484 slat (nsec): min=1519, max=504105, avg=1926.80, stdev=4057.98 00:22:52.484 clat (usec): min=3930, max=10324, avg=5915.52, stdev=490.19 00:22:52.484 lat (usec): min=3932, max=10325, avg=5917.45, stdev=490.33 00:22:52.484 clat percentiles (usec): 00:22:52.484 | 1.00th=[ 4752], 5.00th=[ 5145], 10.00th=[ 5342], 20.00th=[ 5538], 00:22:52.484 | 30.00th=[ 5669], 40.00th=[ 5800], 50.00th=[ 5932], 60.00th=[ 6063], 00:22:52.484 | 70.00th=[ 6128], 80.00th=[ 6259], 90.00th=[ 6456], 95.00th=[ 6587], 00:22:52.484 | 99.00th=[ 6915], 99.50th=[ 7177], 99.90th=[ 9503], 99.95th=[ 9765], 00:22:52.484 | 99.99th=[10290] 00:22:52.484 bw ( KiB/s): 
min=46608, max=48512, per=99.98%, avg=47672.00, stdev=807.48, samples=4 00:22:52.484 iops : min=11652, max=12128, avg=11918.00, stdev=201.87, samples=4 00:22:52.484 write: IOPS=11.9k, BW=46.4MiB/s (48.6MB/s)(93.0MiB/2005msec); 0 zone resets 00:22:52.484 slat (nsec): min=1579, max=263547, avg=2008.39, stdev=2463.58 00:22:52.484 clat (usec): min=3051, max=9506, avg=4813.69, stdev=415.51 00:22:52.484 lat (usec): min=3067, max=9559, avg=4815.69, stdev=415.79 00:22:52.484 clat percentiles (usec): 00:22:52.484 | 1.00th=[ 3884], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4490], 00:22:52.484 | 30.00th=[ 4621], 40.00th=[ 4752], 50.00th=[ 4817], 60.00th=[ 4883], 00:22:52.484 | 70.00th=[ 5014], 80.00th=[ 5080], 90.00th=[ 5276], 95.00th=[ 5407], 00:22:52.484 | 99.00th=[ 5735], 99.50th=[ 6259], 99.90th=[ 8455], 99.95th=[ 8717], 00:22:52.484 | 99.99th=[ 9241] 00:22:52.484 bw ( KiB/s): min=47200, max=47936, per=99.98%, avg=47480.00, stdev=318.26, samples=4 00:22:52.484 iops : min=11800, max=11984, avg=11870.00, stdev=79.57, samples=4 00:22:52.484 lat (msec) : 4=0.90%, 10=99.08%, 20=0.02% 00:22:52.484 cpu : usr=67.37%, sys=27.64%, ctx=538, majf=0, minf=3 00:22:52.484 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:52.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.484 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:52.484 issued rwts: total=23900,23803,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:52.484 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:52.484 00:22:52.484 Run status group 0 (all jobs): 00:22:52.484 READ: bw=46.6MiB/s (48.8MB/s), 46.6MiB/s-46.6MiB/s (48.8MB/s-48.8MB/s), io=93.4MiB (97.9MB), run=2005-2005msec 00:22:52.484 WRITE: bw=46.4MiB/s (48.6MB/s), 46.4MiB/s-46.4MiB/s (48.6MB/s-48.6MB/s), io=93.0MiB (97.5MB), run=2005-2005msec 00:22:52.484 01:28:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:52.484 01:28:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:52.484 01:28:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:52.484 01:28:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:52.484 01:28:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:52.484 01:28:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:52.484 01:28:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:22:52.485 01:28:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:52.485 01:28:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:52.485 01:28:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:52.485 01:28:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:22:52.485 01:28:18 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:52.485 01:28:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:52.485 01:28:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:52.485 01:28:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:52.485 01:28:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:52.485 01:28:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:52.485 01:28:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:52.485 01:28:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:52.485 01:28:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:52.485 01:28:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:52.485 01:28:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:52.485 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:52.485 fio-3.35 00:22:52.485 Starting 1 thread 00:22:52.485 EAL: No free 2048 kB hugepages reported on node 1 00:22:55.004 00:22:55.004 test: (groupid=0, jobs=1): err= 0: pid=3476797: Tue Jul 16 01:28:20 2024 00:22:55.004 read: IOPS=11.0k, BW=171MiB/s (180MB/s)(344MiB/2004msec) 00:22:55.004 slat (nsec): min=2535, max=88450, avg=2948.33, stdev=1442.85 00:22:55.004 clat (usec): min=1717, max=14845, avg=6778.97, stdev=1660.34 00:22:55.004 lat (usec): min=1720, max=14848, avg=6781.92, stdev=1660.51 00:22:55.004 clat percentiles (usec): 00:22:55.004 | 1.00th=[ 3687], 5.00th=[ 4359], 10.00th=[ 4752], 20.00th=[ 5342], 00:22:55.004 | 30.00th=[ 5800], 40.00th=[ 6259], 50.00th=[ 6718], 60.00th=[ 7177], 00:22:55.004 | 70.00th=[ 7504], 80.00th=[ 8094], 90.00th=[ 8717], 95.00th=[ 9634], 00:22:55.004 | 99.00th=[11600], 99.50th=[12518], 99.90th=[14353], 99.95th=[14484], 00:22:55.004 | 99.99th=[14746] 00:22:55.004 bw ( KiB/s): min=84448, max=95070, per=50.03%, avg=87847.50, stdev=5008.95, samples=4 00:22:55.004 iops : min= 5278, max= 5941, avg=5490.25, stdev=312.64, samples=4 00:22:55.004 write: IOPS=6326, BW=98.9MiB/s (104MB/s)(180MiB/1820msec); 0 zone resets 00:22:55.004 slat (usec): min=28, max=323, avg=32.38, stdev= 6.99 00:22:55.004 clat (usec): min=2652, max=15741, avg=8589.29, stdev=1482.62 00:22:55.004 lat (usec): min=2684, max=15857, avg=8621.67, stdev=1484.37 00:22:55.004 clat percentiles (usec): 00:22:55.004 | 1.00th=[ 5735], 5.00th=[ 6456], 10.00th=[ 6915], 20.00th=[ 7373], 00:22:55.004 | 30.00th=[ 7767], 40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[ 8717], 00:22:55.004 | 70.00th=[ 9241], 80.00th=[ 9896], 90.00th=[10552], 95.00th=[11207], 00:22:55.004 | 99.00th=[12387], 99.50th=[13173], 99.90th=[15270], 99.95th=[15533], 00:22:55.004 | 99.99th=[15664] 00:22:55.004 bw ( KiB/s): min=88416, max=98853, per=90.52%, avg=91633.25, stdev=4849.72, samples=4 00:22:55.004 iops : min= 5526, max= 6178, avg=5727.00, stdev=302.95, samples=4 00:22:55.004 lat (msec) : 2=0.04%, 4=1.49%, 10=89.95%, 20=8.53% 00:22:55.004 cpu : usr=83.68%, sys=14.57%, ctx=84, majf=0, 
minf=2 00:22:55.004 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:55.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:55.004 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:55.004 issued rwts: total=21994,11515,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:55.004 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:55.004 00:22:55.004 Run status group 0 (all jobs): 00:22:55.004 READ: bw=171MiB/s (180MB/s), 171MiB/s-171MiB/s (180MB/s-180MB/s), io=344MiB (360MB), run=2004-2004msec 00:22:55.004 WRITE: bw=98.9MiB/s (104MB/s), 98.9MiB/s-98.9MiB/s (104MB/s-104MB/s), io=180MiB (189MB), run=1820-1820msec 00:22:55.004 01:28:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:55.004 01:28:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:22:55.004 01:28:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:55.004 01:28:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:55.004 01:28:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:55.004 01:28:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:55.004 01:28:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:22:55.004 01:28:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:55.004 01:28:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:22:55.004 01:28:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:55.004 01:28:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:55.004 rmmod nvme_tcp 00:22:55.004 rmmod nvme_fabrics 00:22:55.004 rmmod nvme_keyring 00:22:55.004 01:28:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:55.004 01:28:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:22:55.004 01:28:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:22:55.004 01:28:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3475735 ']' 00:22:55.004 01:28:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 3475735 00:22:55.004 01:28:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 3475735 ']' 00:22:55.005 01:28:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 3475735 00:22:55.005 01:28:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:22:55.005 01:28:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:55.005 01:28:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3475735 00:22:55.262 01:28:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:55.262 01:28:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:55.262 01:28:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3475735' 00:22:55.262 killing process with pid 3475735 00:22:55.262 01:28:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 3475735 00:22:55.262 01:28:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 3475735 00:22:55.262 01:28:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:55.262 01:28:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ 
tcp == \t\c\p ]] 00:22:55.263 01:28:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:55.263 01:28:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:55.263 01:28:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:55.263 01:28:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:55.263 01:28:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:55.263 01:28:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:57.786 01:28:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:57.786 00:22:57.786 real 0m15.128s 00:22:57.786 user 0m46.552s 00:22:57.786 sys 0m5.876s 00:22:57.786 01:28:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:57.786 01:28:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.786 ************************************ 00:22:57.786 END TEST nvmf_fio_host 00:22:57.786 ************************************ 00:22:57.786 01:28:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:57.786 01:28:23 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:57.786 01:28:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:57.786 01:28:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:57.786 01:28:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:57.786 ************************************ 00:22:57.786 START TEST nvmf_failover 00:22:57.786 ************************************ 00:22:57.786 01:28:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:57.786 * Looking for test storage... 
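Before the nvmf_failover preamble continues below, the recipe the just-finished nvmf_fio_host test exercised is worth condensing: provision a TCP subsystem over RPC, then drive it with fio through the SPDK NVMe ioengine. The commands are lifted from the trace above (discovery-listener setup omitted), with $SPDK standing in for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout and rpc.py talking to the nvmf_tgt running inside the cvl_0_0_ns_spdk namespace:

    # Target-side provisioning, as issued via scripts/rpc.py in the trace.
    $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: fio with the SPDK NVMe plugin preloaded; the filename
    # string encodes the TCP transport ID instead of a block device path.
    LD_PRELOAD=$SPDK/build/fio/spdk_nvme /usr/src/fio/fio \
        $SPDK/app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
        --bs=4096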
00:22:57.786 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:57.786 01:28:23 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:57.786 01:28:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:57.786 01:28:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:57.786 01:28:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:57.786 01:28:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:57.786 01:28:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:57.786 01:28:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:57.786 01:28:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:57.786 01:28:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:57.786 01:28:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:57.786 01:28:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:57.786 01:28:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:57.786 01:28:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:57.786 01:28:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:57.786 01:28:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:57.786 01:28:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:57.786 01:28:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:57.786 01:28:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:57.786 01:28:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:57.786 01:28:23 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:57.786 01:28:23 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:57.786 01:28:23 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:57.786 01:28:23 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.787 01:28:23 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.787 01:28:23 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.787 01:28:23 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:57.787 01:28:23 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.787 01:28:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:22:57.787 01:28:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:57.787 01:28:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:57.787 01:28:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:57.787 01:28:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:57.787 01:28:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:57.787 01:28:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:57.787 01:28:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:57.787 01:28:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:57.787 01:28:23 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:57.787 01:28:23 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:57.787 01:28:23 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:57.787 01:28:23 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:57.787 01:28:23 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:57.787 01:28:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:57.787 01:28:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:57.787 01:28:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:57.787 01:28:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:22:57.787 01:28:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:57.787 01:28:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.787 01:28:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:57.787 01:28:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:57.787 01:28:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:57.787 01:28:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:57.787 01:28:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:22:57.787 01:28:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:03.045 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:03.045 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:23:03.045 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:03.045 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:03.045 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:03.045 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:03.045 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:03.045 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:23:03.045 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:03.045 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:23:03.045 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:23:03.045 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:23:03.045 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:23:03.045 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:23:03.045 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:23:03.045 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:03.045 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:03.045 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:03.045 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:03.045 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:03.045 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:03.045 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:03.045 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:03.045 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:03.045 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:03.045 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:03.045 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:03.045 01:28:28 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:03.045 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:03.045 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:03.045 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:03.045 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:03.045 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:03.045 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:03.045 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:03.045 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:03.045 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:03.045 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:03.045 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:03.045 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:03.045 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:03.046 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:03.046 Found net devices under 0000:86:00.0: cvl_0_0 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:03.046 Found net devices under 0000:86:00.1: cvl_0_1 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:03.046 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:03.046 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:23:03.046 00:23:03.046 --- 10.0.0.2 ping statistics --- 00:23:03.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:03.046 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:03.046 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:03.046 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:23:03.046 00:23:03.046 --- 10.0.0.1 ping statistics --- 00:23:03.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:03.046 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3480577 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3480577 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3480577 ']' 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:03.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:03.046 01:28:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:03.046 [2024-07-16 01:28:28.899524] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
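The nvmf_tcp_init block above is what builds this test's loopback topology: one e810 port (cvl_0_0) is moved into a private network namespace to play the target, its peer port (cvl_0_1) stays in the root namespace as the initiator, and a single iptables rule admits NVMe/TCP traffic. A minimal sketch of that wiring, using the interface names and addresses from this log (the addr-flush and cleanup steps are omitted):

    ip netns add cvl_0_0_ns_spdk                                        # target port lives here
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP
    ping -c 1 10.0.0.2                                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

Sub-millisecond replies in both directions, as seen here, clear the way for nvmfappstart to launch nvmf_tgt inside the namespace.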
00:23:03.046 [2024-07-16 01:28:28.899568] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:03.046 EAL: No free 2048 kB hugepages reported on node 1 00:23:03.046 [2024-07-16 01:28:28.957810] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:03.303 [2024-07-16 01:28:29.044812] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:03.303 [2024-07-16 01:28:29.044846] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:03.303 [2024-07-16 01:28:29.044855] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:03.303 [2024-07-16 01:28:29.044861] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:03.303 [2024-07-16 01:28:29.044867] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:03.303 [2024-07-16 01:28:29.044900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:03.303 [2024-07-16 01:28:29.044924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:03.303 [2024-07-16 01:28:29.044925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:03.879 01:28:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:03.879 01:28:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:23:03.879 01:28:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:03.879 01:28:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:03.879 01:28:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:03.879 01:28:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:03.879 01:28:29 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:04.192 [2024-07-16 01:28:29.900523] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:04.192 01:28:29 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:04.192 Malloc0 00:23:04.192 01:28:30 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:04.478 01:28:30 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:04.737 01:28:30 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:04.737 [2024-07-16 01:28:30.672275] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:04.737 01:28:30 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:04.994 [2024-07-16 
01:28:30.848803] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:04.994 01:28:30 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:05.252 [2024-07-16 01:28:31.021349] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:05.252 01:28:31 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:05.252 01:28:31 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3481029 00:23:05.252 01:28:31 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:05.252 01:28:31 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3481029 /var/tmp/bdevperf.sock 00:23:05.252 01:28:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3481029 ']' 00:23:05.252 01:28:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:05.252 01:28:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:05.252 01:28:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:05.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:05.252 01:28:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:05.252 01:28:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:06.185 01:28:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:06.185 01:28:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:23:06.185 01:28:31 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:06.442 NVMe0n1 00:23:06.442 01:28:32 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:06.699 00:23:06.699 01:28:32 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:06.699 01:28:32 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3481259 00:23:06.699 01:28:32 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:23:07.631 01:28:33 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:07.889 [2024-07-16 01:28:33.750724] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa74160 is same with the state(5) to be set
[tcp.c:1621 recv-state message for tqpair=0xa74160 repeated dozens of times (timestamps 01:28:33.750780 through 01:28:33.751228) while the qpairs behind the removed 4420 listener were torn down; identical lines omitted]
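Condensing the trace so far: the target inside the namespace exports Malloc0 through subsystem cnode1 on three TCP listeners, bdevperf attaches the subsystem twice under one controller name (two paths), and the script then removes the listener the first path connected through. A hedged recap of the RPC sequence, with $rpc and $brpc as shorthand stand-ins for the full rpc.py invocations the log shows:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    brpc="$rpc -s /var/tmp/bdevperf.sock"     # same script, pointed at bdevperf's socket

    # target: transport, backing bdev, subsystem, namespace, three listeners
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
    done

    # initiator: two paths to the same subsystem under one bdev_nvme controller
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

    # with verify I/O in flight, pull the listener the active path is using
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The burst of tcp.c:1621 recv-state errors collapsed above appears to be the target walking each affected qpair through its teardown states; it reads as alarming but is the expected noise for this step.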
00:23:07.890 01:28:33 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:23:11.166 01:28:36 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:11.425 00:23:11.425 01:28:37 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:11.426 [2024-07-16 01:28:37.403179] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa75090 is same with the state(5) to be set
[tcp.c:1621 recv-state message for tqpair=0xa75090 repeated dozens of times (timestamps 01:28:37.403223 through 01:28:37.403648) while the qpairs behind the removed 4421 listener were torn down; identical lines omitted]
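Same pattern for the second hop: a third path on 4422 is attached, 4421 is dropped, and the initiator should now be running on 4422 alone. The script only checks this implicitly (the verify workload has to survive), but when reproducing by hand the surviving path can be inspected from the bdevperf side; a sketch, assuming the bdev_nvme_get_controllers RPC and its -n name filter are available in this SPDK tree:

    # list NVMe0 as bdevperf sees it; the surviving trsvcid should match
    # the only listener still present on the target
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n NVMe0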
00:23:11.684 01:28:37 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:23:14.969 01:28:40 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:14.969 [2024-07-16 01:28:40.591414] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:14.969 01:28:40 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1
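Failback closes the loop: listener 4420 comes back, the initiator gets a second to reconnect, and the next lines remove 4422 and reap the workload. The launch-and-reap scaffolding around all of this, roughly as the trace shows it ($rootdir stands in for the full Jenkins checkout path; waitforlisten and killprocess are the repo's own helpers from autotest_common.sh):

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # bdevperf in RPC-wait mode (-z), verify workload armed but not started
    "$rootdir/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
    bdevperf_pid=$!
    waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock

    # kick off the I/O, remember the runner's pid
    "$rootdir/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests &
    run_test_pid=$!

    # ... failover / failback RPC steps shown above ...

    wait "$run_test_pid"          # the bare '0' in the trace below is this run's result
    killprocess "$bdevperf_pid"

00:23:15.905 01:28:41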
nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:15.905 01:28:41 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 3481259 00:23:22.479 0 00:23:22.479 01:28:47 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 3481029 00:23:22.479 01:28:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3481029 ']' 00:23:22.479 01:28:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3481029 00:23:22.479 01:28:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:23:22.479 01:28:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:22.479 01:28:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3481029 00:23:22.479 01:28:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:22.479 01:28:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:22.479 01:28:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3481029' 00:23:22.479 killing process with pid 3481029 00:23:22.479 01:28:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3481029 00:23:22.479 01:28:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3481029 00:23:22.479 01:28:47 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:22.479 [2024-07-16 01:28:31.082671] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:23:22.479 [2024-07-16 01:28:31.082721] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3481029 ] 00:23:22.479 EAL: No free 2048 kB hugepages reported on node 1 00:23:22.479 [2024-07-16 01:28:31.139525] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.479 [2024-07-16 01:28:31.214208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:22.479 Running I/O for 15 seconds... 
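Everything from here to the end of try.txt is bdevperf's record of the I/O caught in flight by the first listener removal: each nvme_io_qpair_print_command line is an outstanding READ or WRITE on qid:1, and each completes as ABORTED - SQ DELETION (00/08), i.e. SCT 0x0 (generic) with SC 0x08, because tearing down the listener deleted the submission queue underneath it. The multipath bdev is expected to requeue these on the surviving path, which is why the verify run can still finish cleanly (the '0' reaped above). When triaging a log like this, two one-liners give the dump its shape (a sketch against the try.txt that the trap above cats):

    grep -c 'ABORTED - SQ DELETION' try.txt                           # how many I/Os were cut off
    grep -o 'lba:[0-9]*' try.txt | sort -t: -k2 -n | sed -n '1p;$p'   # LBA span affected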
00:23:22.479 [2024-07-16 01:28:33.752660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:98640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.479 [2024-07-16 01:28:33.752697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the dump continues with matching nvme_io_qpair_print_command / ABORTED - SQ DELETION (00/08) print_completion pairs for every other outstanding READ and WRITE on qid:1, covering lba 98648 through lba 99272; those pairs are omitted here]
00:23:22.481 [2024-07-16 01:28:33.753901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:99280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.481 [2024-07-16 01:28:33.753910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.481 [2024-07-16 01:28:33.753918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.481 [2024-07-16 01:28:33.753924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.481 [2024-07-16 01:28:33.753931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:99296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.481 [2024-07-16 01:28:33.753938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.481 [2024-07-16 01:28:33.753946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.481 [2024-07-16 01:28:33.753952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.481 [2024-07-16 01:28:33.753960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:99312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.481 [2024-07-16 01:28:33.753966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.481 [2024-07-16 01:28:33.753973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:99320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.481 [2024-07-16 01:28:33.753979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.481 [2024-07-16 01:28:33.753988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:99328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.481 [2024-07-16 01:28:33.753995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.481 [2024-07-16 01:28:33.754003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:99336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.481 [2024-07-16 01:28:33.754009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.481 [2024-07-16 01:28:33.754017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.481 [2024-07-16 01:28:33.754023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.481 [2024-07-16 01:28:33.754031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:99352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.481 [2024-07-16 01:28:33.754038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.481 [2024-07-16 01:28:33.754046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:99360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.481 [2024-07-16 01:28:33.754053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.481 
[2024-07-16 01:28:33.754060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:99368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.481 [2024-07-16 01:28:33.754066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.481 [2024-07-16 01:28:33.754074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.481 [2024-07-16 01:28:33.754080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.481 [2024-07-16 01:28:33.754089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:99384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.481 [2024-07-16 01:28:33.754095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.481 [2024-07-16 01:28:33.754103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:99392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.481 [2024-07-16 01:28:33.754109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.481 [2024-07-16 01:28:33.754117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:99400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.481 [2024-07-16 01:28:33.754123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.481 [2024-07-16 01:28:33.754131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:99408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.481 [2024-07-16 01:28:33.754139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.481 [2024-07-16 01:28:33.754147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:99416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.481 [2024-07-16 01:28:33.754153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.481 [2024-07-16 01:28:33.754161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:99424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.481 [2024-07-16 01:28:33.754167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.481 [2024-07-16 01:28:33.754176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:99432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.481 [2024-07-16 01:28:33.754183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.482 [2024-07-16 01:28:33.754191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:99440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.482 [2024-07-16 01:28:33.754197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.482 [2024-07-16 01:28:33.754204] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:99448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.482 [2024-07-16 01:28:33.754210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.482 [2024-07-16 01:28:33.754218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:99456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.482 [2024-07-16 01:28:33.754225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.482 [2024-07-16 01:28:33.754234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:99464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.482 [2024-07-16 01:28:33.754240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.482 [2024-07-16 01:28:33.754247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:99472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.482 [2024-07-16 01:28:33.754253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.482 [2024-07-16 01:28:33.754261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.482 [2024-07-16 01:28:33.754267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.482 [2024-07-16 01:28:33.754276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:99488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.482 [2024-07-16 01:28:33.754284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.482 [2024-07-16 01:28:33.754292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:99496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.482 [2024-07-16 01:28:33.754298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.482 [2024-07-16 01:28:33.754306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.482 [2024-07-16 01:28:33.754312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.482 [2024-07-16 01:28:33.754319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:99512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.482 [2024-07-16 01:28:33.754326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.482 [2024-07-16 01:28:33.754334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:99520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.482 [2024-07-16 01:28:33.754345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.482 [2024-07-16 01:28:33.754353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
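The "(00/08)" in each completion above is the status pair (sct/sc): status code type 0x00 (generic command status) and status code 0x08, ABORTED - SQ DELETION, i.e. the submission queue was deleted while these I/Os were in flight. Below is a minimal sketch of how a consumer of the raw SPDK NVMe driver could recognize this status in a completion callback; the function and context names are illustrative, not taken from this test.

#include "spdk/nvme.h"

/* Illustrative completion callback (sketch): spots the ABORTED - SQ DELETION
 * status that this log prints as "(00/08)". */
void
io_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
{
    if (spdk_nvme_cpl_is_error(cpl) &&
        cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
        cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
        /* The queue pair went away under this command (e.g. during a
         * reset/failover); the I/O never executed and is safe to requeue
         * once the controller is reconnected. */
    }
}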
00:23:22.482 [... condensed: the WRITE at lba:99528 completes the same way; nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs (*ERROR*: aborting queued i/o) and nvme_qpair.c: 558:nvme_qpair_manual_complete_request then drain the still-queued WRITEs lba:99536-99656 (PRP1 0x0 PRP2 0x0), each likewise completed ABORTED - SQ DELETION (00/08) ...]
00:23:22.482 [2024-07-16 01:28:33.764583] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13d78b0 was disconnected and freed. reset controller.
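The sequence just logged -- in-flight commands failed with SQ DELETION, queued requests manually completed, the qpair freed, then "reset controller" -- is bdev_nvme tearing down a dead queue pair before recovering the controller. Against the raw driver API the recovery step looks roughly like the sketch below; this is a simplification under our own naming, not bdev_nvme's actual reset path, which carries more state.

#include "spdk/nvme.h"

/* Sketch: discard a disconnected qpair and reset the controller, mirroring
 * "qpair ... was disconnected and freed. reset controller." above. */
int
recover_ctrlr(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
{
    spdk_nvme_ctrlr_free_io_qpair(qpair);    /* drop the dead queue pair */

    if (spdk_nvme_ctrlr_reset(ctrlr) != 0) { /* "resetting controller" */
        return -1;    /* caller decides: retry, or fail the bdev */
    }

    /* On success ("Resetting controller successful.") new I/O qpairs can
     * be re-created with spdk_nvme_ctrlr_alloc_io_qpair(). */
    return 0;
}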
00:23:22.482 [2024-07-16 01:28:33.764594] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:23:22.482 [... condensed: the four outstanding ASYNC EVENT REQUESTs (0c) qid:0 cid:0-3 are printed by nvme_qpair.c: 223:nvme_admin_qpair_print_command, each completed ABORTED - SQ DELETION (00/08) ...]
00:23:22.483 [2024-07-16 01:28:33.764705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:22.483 [2024-07-16 01:28:33.764734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b1670 (9): Bad file descriptor
00:23:22.483 [2024-07-16 01:28:33.768472] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:22.483 [2024-07-16 01:28:33.841423] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
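The failover notice is why the test registers two listeners (4420 and 4421) for the same subsystem: when the first path dies, bdev_nvme retargets the controller at the alternate address and resets it. With the raw driver the analogous move is spdk_nvme_ctrlr_set_trid() on a failed controller followed by a reset. The sketch below assumes the TCP transport and the subsystem NQN taken from this log; everything else is illustrative, and the exact preconditions on set_trid should be checked against your SPDK version.

#include <stdio.h>
#include "spdk/nvme.h"

/* Sketch: repoint a failed controller from 10.0.0.2:4420 to 10.0.0.2:4421 and
 * reconnect, echoing "Start failover from 10.0.0.2:4420 to 10.0.0.2:4421". */
int
failover_to_alternate(struct spdk_nvme_ctrlr *ctrlr)
{
    struct spdk_nvme_transport_id trid = {};

    spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_TCP);
    trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;
    snprintf(trid.traddr, sizeof(trid.traddr), "10.0.0.2");
    snprintf(trid.trsvcid, sizeof(trid.trsvcid), "4421");
    snprintf(trid.subnqn, sizeof(trid.subnqn), "nqn.2016-06.io.spdk:cnode1");

    if (spdk_nvme_ctrlr_set_trid(ctrlr, &trid) != 0) {
        return -1;    /* only valid while the controller is in a failed state */
    }
    return spdk_nvme_ctrlr_reset(ctrlr);    /* reconnect on the new path */
}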
00:23:22.483 [2024-07-16 01:28:37.405010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:55112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.483 [2024-07-16 01:28:37.405047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.483 [2024-07-16 01:28:37.405062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:55120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.483 [2024-07-16 01:28:37.405069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.483 [2024-07-16 01:28:37.405079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:55128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.483 [2024-07-16 01:28:37.405086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.483 [2024-07-16 01:28:37.405095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:55136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.483 [2024-07-16 01:28:37.405101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.483 [2024-07-16 01:28:37.405109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:55144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.483 [2024-07-16 01:28:37.405116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.483 [2024-07-16 01:28:37.405124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:55152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.483 [2024-07-16 01:28:37.405135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.483 [2024-07-16 01:28:37.405143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:55160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.483 [2024-07-16 01:28:37.405150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.483 [2024-07-16 01:28:37.405158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:55168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.483 [2024-07-16 01:28:37.405165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.483 [2024-07-16 01:28:37.405173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:55176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.483 [2024-07-16 01:28:37.405180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.483 [2024-07-16 01:28:37.405188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:55184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.483 [2024-07-16 01:28:37.405194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.483 [2024-07-16 01:28:37.405202] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:55192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.483 [2024-07-16 01:28:37.405208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.483 [2024-07-16 01:28:37.405216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:55200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.483 [2024-07-16 01:28:37.405223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.483 [2024-07-16 01:28:37.405231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:55208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.483 [2024-07-16 01:28:37.405237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.483 [2024-07-16 01:28:37.405246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:55216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.483 [2024-07-16 01:28:37.405252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.483 [2024-07-16 01:28:37.405260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:55224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.483 [2024-07-16 01:28:37.405266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.483 [2024-07-16 01:28:37.405275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:55232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.483 [2024-07-16 01:28:37.405281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.483 [2024-07-16 01:28:37.405289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:55240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.483 [2024-07-16 01:28:37.405296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.483 [2024-07-16 01:28:37.405304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:55248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.483 [2024-07-16 01:28:37.405311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.483 [2024-07-16 01:28:37.405320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:55256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.483 [2024-07-16 01:28:37.405328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.483 [2024-07-16 01:28:37.405335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:55264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.483 [2024-07-16 01:28:37.405349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.483 [2024-07-16 01:28:37.405357] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:55272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.483 [2024-07-16 01:28:37.405364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.483 [2024-07-16 01:28:37.405372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:55280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.483 [2024-07-16 01:28:37.405380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.483 [2024-07-16 01:28:37.405388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:55288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.483 [2024-07-16 01:28:37.405395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.483 [2024-07-16 01:28:37.405403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:55296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.483 [2024-07-16 01:28:37.405409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.483 [2024-07-16 01:28:37.405417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:55304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.483 [2024-07-16 01:28:37.405424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.483 [2024-07-16 01:28:37.405432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:55312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.483 [2024-07-16 01:28:37.405438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.483 [2024-07-16 01:28:37.405446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:55320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.483 [2024-07-16 01:28:37.405452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.483 [2024-07-16 01:28:37.405461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:55464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.483 [2024-07-16 01:28:37.405467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.483 [2024-07-16 01:28:37.405475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:55472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.483 [2024-07-16 01:28:37.405482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.483 [2024-07-16 01:28:37.405489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:55480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.483 [2024-07-16 01:28:37.405496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.483 [2024-07-16 01:28:37.405504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:88 nsid:1 lba:55488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.484 [2024-07-16 01:28:37.405512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.484 [2024-07-16 01:28:37.405520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:55496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.484 [2024-07-16 01:28:37.405526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.484 [2024-07-16 01:28:37.405534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:55504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.484 [2024-07-16 01:28:37.405541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.484 [2024-07-16 01:28:37.405549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:55512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.484 [2024-07-16 01:28:37.405556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.484 [2024-07-16 01:28:37.405563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:55520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.484 [2024-07-16 01:28:37.405570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.484 [2024-07-16 01:28:37.405577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:55528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.484 [2024-07-16 01:28:37.405584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.484 [2024-07-16 01:28:37.405592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:55536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.484 [2024-07-16 01:28:37.405598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.484 [2024-07-16 01:28:37.405606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:55544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.484 [2024-07-16 01:28:37.405612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.484 [2024-07-16 01:28:37.405620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:55552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.484 [2024-07-16 01:28:37.405626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.484 [2024-07-16 01:28:37.405634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:55560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.484 [2024-07-16 01:28:37.405641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.484 [2024-07-16 01:28:37.405649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:55568 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:23:22.484 [2024-07-16 01:28:37.405656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.484 [2024-07-16 01:28:37.405663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:55576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.484 [2024-07-16 01:28:37.405670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.484 [2024-07-16 01:28:37.405677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:55584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.484 [2024-07-16 01:28:37.405685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.484 [2024-07-16 01:28:37.405694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:55592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.484 [2024-07-16 01:28:37.405700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.484 [2024-07-16 01:28:37.405708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:55600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.484 [2024-07-16 01:28:37.405714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.484 [2024-07-16 01:28:37.405721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.484 [2024-07-16 01:28:37.405727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.484 [2024-07-16 01:28:37.405736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:55616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.484 [2024-07-16 01:28:37.405743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.484 [2024-07-16 01:28:37.405751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:55624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.484 [2024-07-16 01:28:37.405757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.484 [2024-07-16 01:28:37.405764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:55632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.484 [2024-07-16 01:28:37.405771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.484 [2024-07-16 01:28:37.405779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:55640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.484 [2024-07-16 01:28:37.405786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.484 [2024-07-16 01:28:37.405794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:55648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.484 [2024-07-16 
01:28:37.405801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.484 [2024-07-16 01:28:37.405808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:55656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.484 [2024-07-16 01:28:37.405815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.484 [2024-07-16 01:28:37.405822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:55664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.484 [2024-07-16 01:28:37.405829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.484 [2024-07-16 01:28:37.405836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:55672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.484 [2024-07-16 01:28:37.405843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.484 [2024-07-16 01:28:37.405851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:55680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.484 [2024-07-16 01:28:37.405857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.484 [2024-07-16 01:28:37.405865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:55688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.484 [2024-07-16 01:28:37.405871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.484 [2024-07-16 01:28:37.405880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:55696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.484 [2024-07-16 01:28:37.405887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.484 [2024-07-16 01:28:37.405895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:55704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.484 [2024-07-16 01:28:37.405901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.484 [2024-07-16 01:28:37.405909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:55712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.484 [2024-07-16 01:28:37.405915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.484 [2024-07-16 01:28:37.405923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:55720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.484 [2024-07-16 01:28:37.405929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.484 [2024-07-16 01:28:37.405937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:55728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.484 [2024-07-16 01:28:37.405944] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:22.484 [2024-07-16 01:28:37.405952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:55736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:22.484 [2024-07-16 01:28:37.405959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion pair repeats for WRITE lba:55744 through lba:55968 (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000, cids varying) and one READ lba:55328 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each ABORTED - SQ DELETION (00/08) on qid:1 ...]
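Every completion in the run above carries the same status pair, printed in hex as (SCT/SC): (00/08) is Status Code Type 0x0 (Generic Command Status) with Status Code 0x08, which the NVMe base specification names "Command Aborted due to SQ Deletion" - the expected status when a submission queue is torn down with I/O still outstanding. A minimal decoding sketch in Python (illustrative only, not part of the test suite; the lookup tables cover only the codes relevant to this log):

# Decode the "(SCT/SC)" pair that spdk_nvme_print_completion prints, e.g. "(00/08)".
SCT_NAMES = {0x0: "Generic Command Status", 0x1: "Command Specific Status",
             0x2: "Media and Data Integrity Errors", 0x7: "Vendor Specific"}
GENERIC_SC_NAMES = {0x00: "Successful Completion",
                    0x08: "Command Aborted due to SQ Deletion"}

def decode_status(pair: str) -> str:
    sct_hex, sc_hex = pair.strip("()").split("/")
    sct, sc = int(sct_hex, 16), int(sc_hex, 16)
    sct_name = SCT_NAMES.get(sct, f"SCT {sct:#x}")
    sc_name = GENERIC_SC_NAMES.get(sc, f"SC {sc:#x}") if sct == 0x0 else f"SC {sc:#x}"
    return f"{sct_name}: {sc_name}"

print(decode_status("(00/08)"))  # Generic Command Status: Command Aborted due to SQ Deletion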
00:23:22.485 [2024-07-16 01:28:37.406420] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:22.485 [2024-07-16 01:28:37.406426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55976 len:8 PRP1 0x0 PRP2 0x0
00:23:22.485 [2024-07-16 01:28:37.406433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:22.485 [2024-07-16 01:28:37.406467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:22.485 [2024-07-16 01:28:37.406477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the remaining ASYNC EVENT REQUEST commands on the admin qpair (qid:0 cid:2, cid:1, cid:0) are aborted with the same status ...]
00:23:22.485 [2024-07-16 01:28:37.406527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b1670 is same with the state(5) to be set
00:23:22.485 [2024-07-16 01:28:37.406682] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:22.485 [2024-07-16 01:28:37.406689] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:22.485 [2024-07-16 01:28:37.406695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55984 len:8 PRP1 0x0 PRP2 0x0
00:23:22.485 [2024-07-16 01:28:37.406701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the abort/complete-manually triplet repeats for queued WRITE lba:55992 through lba:56128 and a queued READ lba:55336, all len:8, PRP1 0x0 PRP2 0x0 ...]
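From this point the records change shape: each abort is a triplet - nvme_qpair_abort_queued_reqs, nvme_qpair_manual_complete_request, then the command printed with PRP1 0x0 PRP2 0x0 rather than an SGL descriptor, presumably because these requests were still queued in the driver and never had their data pointers filled in before the qpair went away. The fields are regular enough to pull apart mechanically; a hypothetical parser sketch (the regex and names below are not part of SPDK):

import re

# Parse command lines like:
#   "WRITE sqid:1 cid:0 nsid:1 lba:55984 len:8 PRP1 0x0 PRP2 0x0"
CMD_RE = re.compile(
    r"(?P<opc>READ|WRITE) sqid:(?P<sqid>\d+) cid:(?P<cid>\d+)"
    r" nsid:(?P<nsid>\d+) lba:(?P<lba>\d+) len:(?P<len>\d+)"
)

m = CMD_RE.search("WRITE sqid:1 cid:0 nsid:1 lba:55984 len:8 PRP1 0x0 PRP2 0x0")
assert m is not None
fields = {k: (v if k == "opc" else int(v)) for k, v in m.groupdict().items()}
print(fields)  # {'opc': 'WRITE', 'sqid': 1, 'cid': 0, 'nsid': 1, 'lba': 55984, 'len': 8}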
00:23:22.486 [2024-07-16 01:28:37.407165] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:22.486 [2024-07-16 01:28:37.407170] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:22.486 [2024-07-16 01:28:37.407175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55344 len:8 PRP1 0x0 PRP2 0x0
00:23:22.486 [2024-07-16 01:28:37.418027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same triplet repeats for queued READ lba:55352 through lba:55456, then READ lba:55112 through lba:55320, then a queued WRITE lba:55464, all len:8, PRP1 0x0 PRP2 0x0, each ABORTED - SQ DELETION (00/08) ...]
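Across a run like this the useful signal is the aggregate rather than any single record: how many queued commands were flushed, of which opcode, over which LBA span. A short tally sketch building on the same regex (assumes the console output has been saved to a file and that records are one per line as the console prints them; "autotest.log" is a stand-in name, not a file the job produces):

import re
from collections import defaultdict

CMD_RE = re.compile(r"(READ|WRITE) sqid:\d+ cid:\d+ nsid:\d+ lba:(\d+) len:\d+")

counts, lbas = defaultdict(int), defaultdict(list)
pending = None
with open("autotest.log") as log:  # stand-in filename for this console output
    for line in log:
        m = CMD_RE.search(line)
        if m:
            pending = (m.group(1), int(m.group(2)))
        elif "ABORTED - SQ DELETION" in line and pending:
            opc, lba = pending
            counts[opc] += 1
            lbas[opc].append(lba)
            pending = None

for opc, n in counts.items():
    print(f"{opc}: {n} aborted, lba {min(lbas[opc])}..{max(lbas[opc])}")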
00:23:22.488 [2024-07-16 01:28:37.419061] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:22.488 [2024-07-16 01:28:37.419067] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:22.488 [2024-07-16 01:28:37.419073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55472 len:8 PRP1 0x0 PRP2 0x0
00:23:22.488 [2024-07-16 01:28:37.419080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the triplet repeats for queued WRITE lba:55480 through lba:55600, all len:8, PRP1 0x0 PRP2 0x0 ...]
00:23:22.488 [2024-07-16 01:28:37.425967] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:22.488 [2024-07-16 01:28:37.425978] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:22.488 [2024-07-16 01:28:37.425986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55608 len:8 PRP1 0x0 PRP2 0x0
00:23:22.488 [2024-07-16 01:28:37.425996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the triplet repeats for queued WRITE lba:55616 through lba:55800, all len:8, PRP1 0x0 PRP2 0x0 ...]
00:23:22.489 [2024-07-16 01:28:37.426776] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:22.489 [2024-07-16 01:28:37.426782] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:22.489 [2024-07-16 01:28:37.426789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:55808 len:8 PRP1 0x0 PRP2 0x0 00:23:22.489 [2024-07-16 01:28:37.426799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.489 [2024-07-16 01:28:37.426809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:22.489 [2024-07-16 01:28:37.426815] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:22.489 [2024-07-16 01:28:37.426822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55816 len:8 PRP1 0x0 PRP2 0x0 00:23:22.489 [2024-07-16 01:28:37.426830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.489 [2024-07-16 01:28:37.426840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:22.489 [2024-07-16 01:28:37.426847] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:22.490 [2024-07-16 01:28:37.426854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55824 len:8 PRP1 0x0 PRP2 0x0 00:23:22.490 [2024-07-16 01:28:37.426862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.490 [2024-07-16 01:28:37.426871] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:22.490 [2024-07-16 01:28:37.426879] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:22.490 [2024-07-16 01:28:37.426886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55832 len:8 PRP1 0x0 PRP2 0x0 00:23:22.490 [2024-07-16 01:28:37.426894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.490 [2024-07-16 01:28:37.426904] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:22.490 [2024-07-16 01:28:37.426911] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:22.490 [2024-07-16 01:28:37.426918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55840 len:8 PRP1 0x0 PRP2 0x0 00:23:22.490 [2024-07-16 01:28:37.426926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.490 [2024-07-16 01:28:37.426936] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:22.490 [2024-07-16 01:28:37.426943] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:22.490 [2024-07-16 01:28:37.426951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55848 len:8 PRP1 0x0 PRP2 0x0 00:23:22.490 [2024-07-16 01:28:37.426959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.490 [2024-07-16 01:28:37.426968] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:22.490 [2024-07-16 01:28:37.426976] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:22.490 [2024-07-16 01:28:37.426984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55856 len:8 PRP1 0x0 PRP2 0x0 
00:23:22.490 [2024-07-16 01:28:37.426992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.490 [2024-07-16 01:28:37.427002] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:22.490 [2024-07-16 01:28:37.427009] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:22.490 [2024-07-16 01:28:37.427015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55864 len:8 PRP1 0x0 PRP2 0x0 00:23:22.490 [2024-07-16 01:28:37.427023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.490 [2024-07-16 01:28:37.427033] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:22.490 [2024-07-16 01:28:37.427041] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:22.490 [2024-07-16 01:28:37.427049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55872 len:8 PRP1 0x0 PRP2 0x0 00:23:22.490 [2024-07-16 01:28:37.427057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.490 [2024-07-16 01:28:37.427066] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:22.490 [2024-07-16 01:28:37.427072] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:22.490 [2024-07-16 01:28:37.427079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55880 len:8 PRP1 0x0 PRP2 0x0 00:23:22.490 [2024-07-16 01:28:37.427087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.490 [2024-07-16 01:28:37.427097] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:22.490 [2024-07-16 01:28:37.427104] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:22.490 [2024-07-16 01:28:37.427111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55888 len:8 PRP1 0x0 PRP2 0x0 00:23:22.490 [2024-07-16 01:28:37.427119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.490 [2024-07-16 01:28:37.427128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:22.490 [2024-07-16 01:28:37.427135] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:22.490 [2024-07-16 01:28:37.427142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55896 len:8 PRP1 0x0 PRP2 0x0 00:23:22.490 [2024-07-16 01:28:37.427150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.490 [2024-07-16 01:28:37.427159] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:22.490 [2024-07-16 01:28:37.427168] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:22.490 [2024-07-16 01:28:37.427175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55904 len:8 PRP1 0x0 PRP2 0x0 00:23:22.490 [2024-07-16 01:28:37.427183] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.490 [2024-07-16 01:28:37.427193] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:22.490 [2024-07-16 01:28:37.427199] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:22.490 [2024-07-16 01:28:37.427207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55912 len:8 PRP1 0x0 PRP2 0x0 00:23:22.490 [2024-07-16 01:28:37.427216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.490 [2024-07-16 01:28:37.427226] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:22.490 [2024-07-16 01:28:37.427233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:22.490 [2024-07-16 01:28:37.427240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55328 len:8 PRP1 0x0 PRP2 0x0 00:23:22.490 [2024-07-16 01:28:37.427249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.490 [2024-07-16 01:28:37.427258] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:22.490 [2024-07-16 01:28:37.427265] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:22.490 [2024-07-16 01:28:37.427272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55920 len:8 PRP1 0x0 PRP2 0x0 00:23:22.490 [2024-07-16 01:28:37.427281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.490 [2024-07-16 01:28:37.427292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:22.490 [2024-07-16 01:28:37.427299] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:22.490 [2024-07-16 01:28:37.427306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55928 len:8 PRP1 0x0 PRP2 0x0 00:23:22.490 [2024-07-16 01:28:37.427314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.490 [2024-07-16 01:28:37.427324] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:22.490 [2024-07-16 01:28:37.427330] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:22.490 [2024-07-16 01:28:37.427342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55936 len:8 PRP1 0x0 PRP2 0x0 00:23:22.490 [2024-07-16 01:28:37.427351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.490 [2024-07-16 01:28:37.427360] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:22.490 [2024-07-16 01:28:37.427366] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:22.490 [2024-07-16 01:28:37.427374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55944 len:8 PRP1 0x0 PRP2 0x0 00:23:22.490 [2024-07-16 01:28:37.427382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.490 [2024-07-16 01:28:37.427392] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:22.490 [2024-07-16 01:28:37.427399] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:22.490 [2024-07-16 01:28:37.427406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55952 len:8 PRP1 0x0 PRP2 0x0 00:23:22.490 [2024-07-16 01:28:37.427414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.490 [2024-07-16 01:28:37.427424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:22.490 [2024-07-16 01:28:37.427431] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:22.490 [2024-07-16 01:28:37.427438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55960 len:8 PRP1 0x0 PRP2 0x0 00:23:22.490 [2024-07-16 01:28:37.427446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.490 [2024-07-16 01:28:37.427456] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:22.490 [2024-07-16 01:28:37.427463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:22.490 [2024-07-16 01:28:37.427470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55968 len:8 PRP1 0x0 PRP2 0x0 00:23:22.490 [2024-07-16 01:28:37.427478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.490 [2024-07-16 01:28:37.427488] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:22.490 [2024-07-16 01:28:37.427496] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:22.490 [2024-07-16 01:28:37.427503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55976 len:8 PRP1 0x0 PRP2 0x0 00:23:22.490 [2024-07-16 01:28:37.427511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.490 [2024-07-16 01:28:37.427557] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13e03c0 was disconnected and freed. reset controller. 00:23:22.490 [2024-07-16 01:28:37.427570] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:23:22.490 [2024-07-16 01:28:37.427579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:22.490 [2024-07-16 01:28:37.427617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b1670 (9): Bad file descriptor 00:23:22.490 [2024-07-16 01:28:37.431656] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:22.490 [2024-07-16 01:28:37.465895] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:22.490 [2024-07-16 01:28:41.802912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:66032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.490 [2024-07-16 01:28:41.802961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.490 [2024-07-16 01:28:41.802977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:65208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.490 [2024-07-16 01:28:41.802985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.490 [2024-07-16 01:28:41.802994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:65216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.490 [2024-07-16 01:28:41.803001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.490 [2024-07-16 01:28:41.803009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:65224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.490 [2024-07-16 01:28:41.803015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.490 [2024-07-16 01:28:41.803025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:65232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.490 [2024-07-16 01:28:41.803032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.491 [2024-07-16 01:28:41.803041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:65240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.491 [2024-07-16 01:28:41.803049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.491 [2024-07-16 01:28:41.803058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:65248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.491 [2024-07-16 01:28:41.803064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.491 [2024-07-16 01:28:41.803072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:65256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.491 [2024-07-16 01:28:41.803081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.491 [2024-07-16 01:28:41.803090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:65264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.491 [2024-07-16 01:28:41.803098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.491 [2024-07-16 01:28:41.803106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:65272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.491 [2024-07-16 01:28:41.803114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.491 [2024-07-16 01:28:41.803122] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:65280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.491 [2024-07-16 01:28:41.803129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.491 [2024-07-16 01:28:41.803138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:65288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.491 [2024-07-16 01:28:41.803150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.491 [2024-07-16 01:28:41.803159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:65296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.491 [2024-07-16 01:28:41.803165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.491 [2024-07-16 01:28:41.803174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:65304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.491 [2024-07-16 01:28:41.803180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.491 [2024-07-16 01:28:41.803189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:65312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.491 [2024-07-16 01:28:41.803196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.491 [2024-07-16 01:28:41.803204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.491 [2024-07-16 01:28:41.803210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.491 [2024-07-16 01:28:41.803218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:65328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.491 [2024-07-16 01:28:41.803224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.491 [2024-07-16 01:28:41.803233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:66040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.491 [2024-07-16 01:28:41.803239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.491 [2024-07-16 01:28:41.803248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:66048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.491 [2024-07-16 01:28:41.803254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.491 [2024-07-16 01:28:41.803262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:66056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.491 [2024-07-16 01:28:41.803268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.491 [2024-07-16 01:28:41.803277] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:66064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.491 [2024-07-16 01:28:41.803283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.491 [2024-07-16 01:28:41.803291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:66072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.491 [2024-07-16 01:28:41.803298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.491 [2024-07-16 01:28:41.803306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:66080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.491 [2024-07-16 01:28:41.803312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.491 [2024-07-16 01:28:41.803320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.491 [2024-07-16 01:28:41.803327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.491 [2024-07-16 01:28:41.803343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:66096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.491 [2024-07-16 01:28:41.803350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.491 [2024-07-16 01:28:41.803359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:66104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.491 [2024-07-16 01:28:41.803366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.491 [2024-07-16 01:28:41.803374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:66112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.491 [2024-07-16 01:28:41.803381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.491 [2024-07-16 01:28:41.803389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.491 [2024-07-16 01:28:41.803395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.491 [2024-07-16 01:28:41.803404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.491 [2024-07-16 01:28:41.803411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.491 [2024-07-16 01:28:41.803418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:66136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.491 [2024-07-16 01:28:41.803425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.491 [2024-07-16 01:28:41.803433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:66144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.491 [2024-07-16 01:28:41.803440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.491 [2024-07-16 01:28:41.803447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.491 [2024-07-16 01:28:41.803454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.491 [2024-07-16 01:28:41.803462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:66160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.491 [2024-07-16 01:28:41.803468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.491 [2024-07-16 01:28:41.803477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:66168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.491 [2024-07-16 01:28:41.803484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.491 [2024-07-16 01:28:41.803493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:66176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.491 [2024-07-16 01:28:41.803500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.491 [2024-07-16 01:28:41.803507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:66184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.491 [2024-07-16 01:28:41.803514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.491 [2024-07-16 01:28:41.803522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:66192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.491 [2024-07-16 01:28:41.803530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.491 [2024-07-16 01:28:41.803538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:66200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.491 [2024-07-16 01:28:41.803545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.491 [2024-07-16 01:28:41.803553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:66208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:22.491 [2024-07-16 01:28:41.803559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.491 [2024-07-16 01:28:41.803568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:65336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.491 [2024-07-16 01:28:41.803574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.491 [2024-07-16 01:28:41.803582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:65344 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:22.491 [2024-07-16 01:28:41.803589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.491 [2024-07-16 01:28:41.803597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:65352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.491 [2024-07-16 01:28:41.803604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.491 [2024-07-16 01:28:41.803612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:65360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.491 [2024-07-16 01:28:41.803621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.491 [2024-07-16 01:28:41.803629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:65368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.491 [2024-07-16 01:28:41.803635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.491 [2024-07-16 01:28:41.803643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.491 [2024-07-16 01:28:41.803650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.491 [2024-07-16 01:28:41.803658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:65384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.491 [2024-07-16 01:28:41.803665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.491 [2024-07-16 01:28:41.803673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:65392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.492 [2024-07-16 01:28:41.803680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.492 [2024-07-16 01:28:41.803688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:65400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.492 [2024-07-16 01:28:41.803695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.492 [2024-07-16 01:28:41.803703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:65408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.492 [2024-07-16 01:28:41.803710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.492 [2024-07-16 01:28:41.803719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:65416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.492 [2024-07-16 01:28:41.803726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.492 [2024-07-16 01:28:41.803735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:65424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.492 [2024-07-16 
01:28:41.803742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.492 [2024-07-16 01:28:41.803750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:65432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.492 [2024-07-16 01:28:41.803757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.492 [2024-07-16 01:28:41.803765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:65440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.492 [2024-07-16 01:28:41.803772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.492 [2024-07-16 01:28:41.803780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:65448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.492 [2024-07-16 01:28:41.803786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.492 [2024-07-16 01:28:41.803794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:65456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.492 [2024-07-16 01:28:41.803801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.492 [2024-07-16 01:28:41.803809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.492 [2024-07-16 01:28:41.803815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.492 [2024-07-16 01:28:41.803824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:65472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.492 [2024-07-16 01:28:41.803830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.492 [2024-07-16 01:28:41.803839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:65480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.492 [2024-07-16 01:28:41.803845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.492 [2024-07-16 01:28:41.803853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:65488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.492 [2024-07-16 01:28:41.803860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.492 [2024-07-16 01:28:41.803868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:65496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.492 [2024-07-16 01:28:41.803875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.492 [2024-07-16 01:28:41.803883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:65504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.492 [2024-07-16 01:28:41.803889] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.492 [2024-07-16 01:28:41.803897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:65512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.492 [2024-07-16 01:28:41.803904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.492 [2024-07-16 01:28:41.803917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:65520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.492 [2024-07-16 01:28:41.803924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.492 [2024-07-16 01:28:41.803933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:65528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.492 [2024-07-16 01:28:41.803940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.492 [2024-07-16 01:28:41.803948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:65536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.492 [2024-07-16 01:28:41.803955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.492 [2024-07-16 01:28:41.803969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:65544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.492 [2024-07-16 01:28:41.803977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.492 [2024-07-16 01:28:41.803985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:65552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.492 [2024-07-16 01:28:41.803993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.492 [2024-07-16 01:28:41.804001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:65560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.492 [2024-07-16 01:28:41.804008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.492 [2024-07-16 01:28:41.804017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:65568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.492 [2024-07-16 01:28:41.804025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.492 [2024-07-16 01:28:41.804032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:65576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.492 [2024-07-16 01:28:41.804039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.492 [2024-07-16 01:28:41.804048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:65584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.492 [2024-07-16 01:28:41.804054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.492 [2024-07-16 01:28:41.804063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:65592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.492 [2024-07-16 01:28:41.804070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.492 [2024-07-16 01:28:41.804078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:65600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.492 [2024-07-16 01:28:41.804085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.492 [2024-07-16 01:28:41.804093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:65608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.492 [2024-07-16 01:28:41.804099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.492 [2024-07-16 01:28:41.804108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:65616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.492 [2024-07-16 01:28:41.804116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.492 [2024-07-16 01:28:41.804124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:65624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.492 [2024-07-16 01:28:41.804131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.492 [2024-07-16 01:28:41.804139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:65632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.492 [2024-07-16 01:28:41.804146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.492 [2024-07-16 01:28:41.804154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:65640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.492 [2024-07-16 01:28:41.804162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.492 [2024-07-16 01:28:41.804169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:65648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.492 [2024-07-16 01:28:41.804177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.492 [2024-07-16 01:28:41.804185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:65656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.492 [2024-07-16 01:28:41.804192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.492 [2024-07-16 01:28:41.804200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:65664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.492 [2024-07-16 01:28:41.804207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.492 [2024-07-16 01:28:41.804217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:65672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.492 [2024-07-16 01:28:41.804224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.492 [2024-07-16 01:28:41.804232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:65680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.493 [2024-07-16 01:28:41.804239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.493 [2024-07-16 01:28:41.804247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:65688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.493 [2024-07-16 01:28:41.804254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.493 [2024-07-16 01:28:41.804263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:65696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.493 [2024-07-16 01:28:41.804270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.493 [2024-07-16 01:28:41.804278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:65704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.493 [2024-07-16 01:28:41.804284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.493 [2024-07-16 01:28:41.804293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:65712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.493 [2024-07-16 01:28:41.804299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.493 [2024-07-16 01:28:41.804310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:65720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.493 [2024-07-16 01:28:41.804316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.493 [2024-07-16 01:28:41.804325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:65728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.493 [2024-07-16 01:28:41.804331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.493 [2024-07-16 01:28:41.804344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:65736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.493 [2024-07-16 01:28:41.804351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.493 [2024-07-16 01:28:41.804359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:65744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.493 [2024-07-16 01:28:41.804366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:22.493 [2024-07-16 01:28:41.804374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:65752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.493 [2024-07-16 01:28:41.804381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.493 [2024-07-16 01:28:41.804389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:65760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.493 [2024-07-16 01:28:41.804396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.493 [2024-07-16 01:28:41.804405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:65768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.493 [2024-07-16 01:28:41.804411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.493 [2024-07-16 01:28:41.804420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:65776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.493 [2024-07-16 01:28:41.804427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.493 [2024-07-16 01:28:41.804436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:65784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.493 [2024-07-16 01:28:41.804443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.493 [2024-07-16 01:28:41.804451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.493 [2024-07-16 01:28:41.804458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.493 [2024-07-16 01:28:41.804467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:65800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.493 [2024-07-16 01:28:41.804474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.493 [2024-07-16 01:28:41.804482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:65808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.493 [2024-07-16 01:28:41.804489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.493 [2024-07-16 01:28:41.804497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:65816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.493 [2024-07-16 01:28:41.804505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.493 [2024-07-16 01:28:41.804514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:65824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.493 [2024-07-16 01:28:41.804521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.493 [2024-07-16 01:28:41.804530] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:65832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.493 [2024-07-16 01:28:41.804536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:22.493 [2024-07-16 01:28:41.804545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:22.493 [2024-07-16 01:28:41.804551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:22.493 [2024-07-16 01:28:41.804560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:66224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:22.493 [2024-07-16 01:28:41.804566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:22.493 [2024-07-16 01:28:41.804575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:65840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.493 [2024-07-16 01:28:41.804582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:22.493 [2024-07-16 01:28:41.804591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:65848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.493 [2024-07-16 01:28:41.804598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:22.493 [2024-07-16 01:28:41.804607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:65856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.493 [2024-07-16 01:28:41.804613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:22.493 [2024-07-16 01:28:41.804621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:65864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.493 [2024-07-16 01:28:41.804628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:22.493 [2024-07-16 01:28:41.804636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:65872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.493 [2024-07-16 01:28:41.804643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:22.493 [2024-07-16 01:28:41.804651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:65880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.493 [2024-07-16 01:28:41.804658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:22.493 [2024-07-16 01:28:41.804667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:65888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.493 [2024-07-16 01:28:41.804673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:22.493 [2024-07-16 01:28:41.804681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:65896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.493 [2024-07-16 01:28:41.804689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:22.493 [2024-07-16 01:28:41.804698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:65904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.493 [2024-07-16 01:28:41.804706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:22.493 [2024-07-16 01:28:41.804714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:65912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.493 [2024-07-16 01:28:41.804721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:22.493 [2024-07-16 01:28:41.804729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:65920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.493 [2024-07-16 01:28:41.804736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:22.493 [2024-07-16 01:28:41.804744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:65928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.493 [2024-07-16 01:28:41.804751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:22.493 [2024-07-16 01:28:41.804759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:65936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.493 [2024-07-16 01:28:41.804766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:22.493 [2024-07-16 01:28:41.804775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:65944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.493 [2024-07-16 01:28:41.804782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:22.493 [2024-07-16 01:28:41.804790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:65952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.493 [2024-07-16 01:28:41.804797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:22.493 [2024-07-16 01:28:41.804804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:65960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.493 [2024-07-16 01:28:41.804811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:22.493 [2024-07-16 01:28:41.804819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:65968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.493 [2024-07-16 01:28:41.804826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:22.493 [2024-07-16 01:28:41.804834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:65976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.493 [2024-07-16 01:28:41.804841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:22.493 [2024-07-16 01:28:41.804849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:65984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.493 [2024-07-16 01:28:41.804855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:22.493 [2024-07-16 01:28:41.804864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:65992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.493 [2024-07-16 01:28:41.804871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:22.493 [2024-07-16 01:28:41.804879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:66000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.493 [2024-07-16 01:28:41.804886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:22.493 [2024-07-16 01:28:41.804896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:66008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.494 [2024-07-16 01:28:41.804903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:22.494 [2024-07-16 01:28:41.804911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:66016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.494 [2024-07-16 01:28:41.804918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:22.494 [2024-07-16 01:28:41.804926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ed8a0 is same with the state(5) to be set
00:23:22.494 [2024-07-16 01:28:41.804935] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:22.494 [2024-07-16 01:28:41.804941] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:22.494 [2024-07-16 01:28:41.804947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66024 len:8 PRP1 0x0 PRP2 0x0
00:23:22.494 [2024-07-16 01:28:41.804953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:22.494 [2024-07-16 01:28:41.804995] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13ed8a0 was disconnected and freed. reset controller.
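Each *NOTICE* print_command line above is one queued I/O that bdev_nvme aborted while tearing down the TCP qpair for the failed path, and the paired completion gives the NVMe status (00/08): status code type 0x0 (generic) with status code 0x08, ABORTED - SQ DELETION. dnr:0 (do-not-retry = 0) marks these as retryable, which is why the run still completes once the controller is reset. A quick way to tally such aborts from the captured per-run log (a sketch only; try.txt is the file this test cats and deletes further down):

  # hypothetical post-mortem count of I/O aborted by SQ deletion
  grep -c 'ABORTED - SQ DELETION (00/08)' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt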
00:23:22.494 [2024-07-16 01:28:41.805005] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:23:22.494 [2024-07-16 01:28:41.805026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:22.494 [2024-07-16 01:28:41.805034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:22.494 [2024-07-16 01:28:41.805042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:22.494 [2024-07-16 01:28:41.805049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:22.494 [2024-07-16 01:28:41.805056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:22.494 [2024-07-16 01:28:41.805062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:22.494 [2024-07-16 01:28:41.805070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:22.494 [2024-07-16 01:28:41.805077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:22.494 [2024-07-16 01:28:41.805084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:22.494 [2024-07-16 01:28:41.807857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:22.494 [2024-07-16 01:28:41.807890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b1670 (9): Bad file descriptor
00:23:22.494 [2024-07-16 01:28:41.842258] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
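The 'Start failover from 10.0.0.2:4422 to 10.0.0.2:4420' notice is bdev_nvme cycling to the next transport ID registered for the same controller: the alternate paths exist because the test attaches NVMe0 to the same subsystem NQN on three ports, as the trace further down shows. A minimal sketch of that registration (same addresses and NQN as this run; rpc.py stands in for the full workspace path used in the trace):

  # register three paths for one bdev; on path failure bdev_nvme resets against the next one
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1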
00:23:22.494
00:23:22.494 Latency(us)
00:23:22.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:22.494 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:22.494 Verification LBA range: start 0x0 length 0x4000
00:23:22.494 NVMe0n1 : 15.01 11202.50 43.76 410.78 0.00 11000.39 411.55 31457.28
00:23:22.494 ===================================================================================================================
00:23:22.494 Total : 11202.50 43.76 410.78 0.00 11000.39 411.55 31457.28
00:23:22.494 Received shutdown signal, test time was about 15.000000 seconds
00:23:22.494
00:23:22.494 Latency(us)
00:23:22.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:22.494 ===================================================================================================================
00:23:22.494 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
01:28:47 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:23:22.494 01:28:47 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:23:22.494 01:28:47 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:23:22.494 01:28:47 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3483778
00:23:22.494 01:28:47 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:23:22.494 01:28:47 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3483778 /var/tmp/bdevperf.sock
00:23:22.494 01:28:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3483778 ']'
00:23:22.494 01:28:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:22.494 01:28:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:23:22.494 01:28:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
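The table above is internally consistent: at a 4096-byte I/O size, 11202.50 IOPS works out to 11202.50 * 4096 / 1048576 ≈ 43.76 MiB/s, exactly the MiB/s column, and the count=3 check that follows verifies one 'Resetting controller successful' message per failover triggered during the 15-second run. Checked with:

  echo 'scale=2; 11202.50 * 4096 / 1048576' | bc   # 43.76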
00:23:22.494 01:28:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable
00:23:22.494 01:28:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:23:23.061 01:28:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:23.061 01:28:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0
00:23:23.061 01:28:48 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
[2024-07-16 01:28:48.988784] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:23:23.061 01:28:49 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:23:23.320 [2024-07-16 01:28:49.173263] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:23:23.320 01:28:49 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:23.578 NVMe0n1
00:23:23.578 01:28:49 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:23.836
00:23:24.095 01:28:49 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:24.353
00:23:24.353 01:28:50 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
01:28:50 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:23:24.612 01:28:50 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:24.612 01:28:50 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:23:27.894 01:28:53 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
01:28:53 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
01:28:53 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3484712
01:28:53 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
01:28:53 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 3484712
00:23:29.270 0
00:23:29.270 01:28:54 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-07-16 01:28:48.008698] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization...
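The sequencing here is worth spelling out: bdevperf was launched with -z, so after initialization it idles and serves RPCs on /var/tmp/bdevperf.sock instead of running immediately; the @89 step drives the actual run with a perform_tests RPC, @92 waits for it (the lone '0' is the status it echoes), and @94 replays the run's log from try.txt. The general pattern, condensed from the trace (paths relative to the spdk checkout):

  # start bdevperf idle, waiting for RPCs, then trigger the workload remotely
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests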
00:23:29.270 [2024-07-16 01:28:48.008748] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3483778 ]
00:23:29.270 EAL: No free 2048 kB hugepages reported on node 1
00:23:29.270 [2024-07-16 01:28:48.064991] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:29.270 [2024-07-16 01:28:48.133018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:23:29.270 [2024-07-16 01:28:50.530460] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:23:29.270 [2024-07-16 01:28:50.530509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:29.270 [2024-07-16 01:28:50.530520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:29.270 [2024-07-16 01:28:50.530529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:29.270 [2024-07-16 01:28:50.530536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:29.270 [2024-07-16 01:28:50.530543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:29.270 [2024-07-16 01:28:50.530549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:29.270 [2024-07-16 01:28:50.530556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:29.270 [2024-07-16 01:28:50.530563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:29.270 [2024-07-16 01:28:50.530569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:29.270 [2024-07-16 01:28:50.530599] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:29.270 [2024-07-16 01:28:50.530614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165a670 (9): Bad file descriptor
00:23:29.270 [2024-07-16 01:28:50.582354] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:23:29.270 Running I/O for 1 seconds...
00:23:29.270
00:23:29.270 Latency(us)
00:23:29.270 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:29.270 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:29.270 Verification LBA range: start 0x0 length 0x4000
00:23:29.270 NVMe0n1 : 1.01 11266.97 44.01 0.00 0.00 11316.81 1084.46 8862.96
00:23:29.270 ===================================================================================================================
00:23:29.270 Total : 11266.97 44.01 0.00 0.00 11316.81 1084.46 8862.96
00:23:29.270 01:28:54 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:29.270 01:28:54 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:23:29.270 01:28:55 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:29.270 01:28:55 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:23:29.270 01:28:55 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:29.528 01:28:55 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:29.786 01:28:55 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:23:33.073 01:28:58 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:33.073 01:28:58 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:23:33.073 01:28:58 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 3483778
00:23:33.073 01:28:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3483778 ']'
00:23:33.073 01:28:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3483778
00:23:33.073 01:28:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:23:33.073 01:28:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:33.073 01:28:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3483778
00:23:33.073 01:28:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:23:33.073 01:28:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:23:33.073 01:28:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3483778'
killing process with pid 3483778
00:23:33.073 01:28:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3483778
00:23:33.073 01:28:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3483778
00:23:33.073 01:28:58 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
00:23:33.073 01:28:58 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:33.332 01:28:59 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
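Two sanity checks on the one-second verify run above: 11266.97 IOPS at 4096 bytes is 11266.97 * 4096 / 1048576 ≈ 44.01 MiB/s, matching the MiB/s column; and with queue depth 128, Little's law (outstanding I/Os = IOPS * average latency) gives 11266.97 * 11316.81 us ≈ 127.5, i.e. the queue stayed essentially full for the whole run. As bc one-liners:

  echo 'scale=2; 11266.97 * 4096 / 1048576' | bc        # 44.01 MiB/s
  echo 'scale=1; 11266.97 * 11316.81 / 1000000' | bc    # ~127.5 I/Os in flight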
00:23:33.332 01:28:59 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:23:33.332 01:28:59 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
00:23:33.332 01:28:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup
00:23:33.332 01:28:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync
00:23:33.332 01:28:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:23:33.332 01:28:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e
00:23:33.332 01:28:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20}
00:23:33.332 01:28:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:23:33.332 01:28:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:23:33.332 01:28:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e
00:23:33.332 01:28:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0
00:23:33.332 01:28:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3480577 ']'
00:23:33.332 01:28:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3480577
00:23:33.332 01:28:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3480577 ']'
00:23:33.332 01:28:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3480577
00:23:33.332 01:28:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:23:33.332 01:28:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:33.332 01:28:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3480577
00:23:33.332 01:28:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:23:33.332 01:28:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:23:33.332 01:28:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3480577'
killing process with pid 3480577
00:23:33.332 01:28:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3480577
00:23:33.332 01:28:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3480577
00:23:33.591 01:28:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:23:33.591 01:28:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:23:33.591 01:28:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:23:33.591 01:28:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:23:33.591 01:28:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns
00:23:33.591 01:28:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:33.591 01:28:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:23:33.591 01:28:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:35.619 01:29:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:23:35.877
00:23:35.877 real 0m38.201s
00:23:35.877 user 2m3.403s
00:23:35.877 sys 0m7.343s
00:23:35.877 01:29:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable
00:23:35.877 01:29:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:23:35.877 ************************************
00:23:35.877 END TEST nvmf_failover
00:23:35.877 ************************************
00:23:35.877 01:29:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:23:35.877 01:29:01 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:23:35.877 01:29:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:23:35.877 01:29:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:23:35.877 01:29:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:23:35.877 ************************************
00:23:35.877 START TEST nvmf_host_discovery
00:23:35.877 ************************************
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:23:35.877 * Looking for test storage...
00:23:35.877 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']'
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable
00:23:35.877 01:29:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:41.148 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:23:41.148 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=()
00:23:41.148 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs
00:23:41.148 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=()
00:23:41.148 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:23:41.148 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=()
00:23:41.148 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers
00:23:41.148 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=()
00:23:41.148 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs
00:23:41.148 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=()
00:23:41.148 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810
00:23:41.148 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=()
00:23:41.148 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722
00:23:41.148 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=()
00:23:41.148 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx
00:23:41.148 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:23:41.148 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:23:41.148 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:23:41.148 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:23:41.148 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:23:41.148 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:23:41.148 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:23:41.148 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:23:41.148 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:23:41.148 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:23:41.148 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:23:41.148 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:23:41.148 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:23:41.148 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:23:41.148 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:23:41.148 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:23:41.148 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:23:41.148 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:23:41.148 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
Found 0000:86:00.0 (0x8086 - 0x159b)
00:23:41.148 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:23:41.148 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:23:41.148 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:23:41.148 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:23:41.148 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:23:41.148 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:23:41.148 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
Found 0000:86:00.1 (0x8086 - 0x159b)
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]]
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
Found net devices under 0000:86:00.0: cvl_0_0
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]]
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
Found net devices under 0000:86:00.1: cvl_0_1
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:23:41.149 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:41.149 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms
00:23:41.149
00:23:41.149 --- 10.0.0.2 ping statistics ---
00:23:41.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:41.149 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:41.149 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:41.149 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms
00:23:41.149
00:23:41.149 --- 10.0.0.1 ping statistics ---
00:23:41.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:41.149 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:23:41.149 01:29:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:23:41.149 01:29:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2
00:23:41.149 01:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:23:41.149 01:29:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable
00:23:41.149 01:29:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:41.149 01:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=3488949
00:23:41.149 01:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 3488949
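The interface plumbing above is what lets a single machine act as both target and initiator over real e810 hardware: the first port (cvl_0_0) moves into a private network namespace and becomes the target at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the two pings prove both directions route before the target starts. Condensed from the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into its own namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ping -c 1 10.0.0.2                                   # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator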
3488949 00:23:41.149 01:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:41.149 01:29:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 3488949 ']' 00:23:41.149 01:29:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:41.149 01:29:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:41.149 01:29:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:41.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:41.149 01:29:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:41.149 01:29:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.149 [2024-07-16 01:29:07.063689] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:23:41.149 [2024-07-16 01:29:07.063735] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:41.149 EAL: No free 2048 kB hugepages reported on node 1 00:23:41.149 [2024-07-16 01:29:07.124474] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.408 [2024-07-16 01:29:07.202436] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:41.408 [2024-07-16 01:29:07.202470] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:41.408 [2024-07-16 01:29:07.202476] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:41.408 [2024-07-16 01:29:07.202482] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:41.408 [2024-07-16 01:29:07.202487] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:41.408 [2024-07-16 01:29:07.202503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:41.975 01:29:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:41.975 01:29:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:23:41.975 01:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:41.975 01:29:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:41.975 01:29:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.975 01:29:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:41.975 01:29:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:41.975 01:29:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.975 01:29:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.975 [2024-07-16 01:29:07.891567] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:41.975 01:29:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.975 01:29:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:41.975 01:29:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.975 01:29:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.975 [2024-07-16 01:29:07.903700] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:41.975 01:29:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.975 01:29:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:41.975 01:29:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.975 01:29:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.975 null0 00:23:41.975 01:29:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.975 01:29:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:41.975 01:29:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.975 01:29:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.975 null1 00:23:41.975 01:29:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.975 01:29:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:41.975 01:29:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.975 01:29:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.975 01:29:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.975 01:29:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3489191 00:23:41.975 01:29:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:41.975 01:29:07 nvmf_tcp.nvmf_host_discovery -- 
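At this point two SPDK applications are running: the target (nvmf_tgt inside the namespace, RPC socket /var/tmp/spdk.sock, discovery subsystem listening on 10.0.0.2:8009, with two null bdevs created as backing namespaces) and a second, host-side nvmf_tgt on /tmp/host.sock that acts as the discovery client. The step the trace reaches shortly below (@51) points that client at the discovery service; its shape, as used in this run (rpc_cmd is the test suite's wrapper around scripts/rpc.py):

  # host side: follow the discovery service and auto-create bdevs for discovered subsystems
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test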
01:29:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 3489191 ']'
01:29:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock
01:29:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100
01:29:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
01:29:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable
01:29:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:42.234 [2024-07-16 01:29:07.978981] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization...
00:23:42.234 [2024-07-16 01:29:07.979023] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3489191 ]
00:23:42.234 EAL: No free 2048 kB hugepages reported on node 1
00:23:42.234 [2024-07-16 01:29:08.033985] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:42.234 [2024-07-16 01:29:08.112674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:23:42.801 01:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:42.801 01:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0
00:23:42.801 01:29:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:23:42.801 01:29:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
00:23:42.801 01:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:42.801 01:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:42.801 01:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:42.801 01:29:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
00:23:42.801 01:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:42.801 01:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:42.801 01:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:42.801 01:29:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]]
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]]
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]]
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]]
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:43.060 01:29:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:43.060 01:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
01:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]]
01:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list
01:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
01:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
01:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
01:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
01:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
01:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:43.318 01:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
01:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]]
01:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
01:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
01:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:43.318 [2024-07-16 01:29:09.090814] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:43.318 01:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:43.318 01:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names
00:23:43.318 01:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:43.318 01:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:43.318 01:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:43.318 01:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:43.318 01:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:43.318 01:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:43.318 01:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:43.318 01:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]]
00:23:43.318 01:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list
00:23:43.318 01:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:43.318 01:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:43.318 01:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:43.318 01:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:43.318 01:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:43.318 01:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]]
00:23:43.318 01:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0
00:23:43.318 01:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:23:43.318 01:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:23:43.318 01:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:23:43.318 01:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10
00:23:43.318 01:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- ))
00:23:43.318 01:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:23:43.318 01:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count
00:23:43.318 01:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:23:43.318 01:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '.
| length' 00:23:43.318 01:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.318 01:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.318 01:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.318 01:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:43.318 01:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:43.318 01:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:43.318 01:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:43.318 01:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:43.318 01:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.318 01:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.318 01:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.318 01:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:43.318 01:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:43.319 01:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:43.319 01:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:43.319 01:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:43.319 01:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:43.319 01:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:43.319 01:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:43.319 01:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.319 01:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.319 01:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:43.319 01:29:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:43.319 01:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.319 01:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:23:43.319 01:29:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:23:43.886 [2024-07-16 01:29:09.831835] bdev_nvme.c:6991:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:43.886 [2024-07-16 01:29:09.831854] bdev_nvme.c:7071:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:43.886 [2024-07-16 01:29:09.831865] bdev_nvme.c:6954:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:44.145 [2024-07-16 01:29:09.958248] bdev_nvme.c:6920:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:44.145 [2024-07-16 01:29:10.022100] bdev_nvme.c:6810:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:23:44.145 [2024-07-16 01:29:10.022120] bdev_nvme.c:6769:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:44.403 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:44.403 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:44.403 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:44.403 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:44.403 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:44.403 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.403 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:44.403 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.403 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:44.403 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.403 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.403 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:44.403 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:44.403 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:44.403 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:44.403 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:44.403 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:44.403 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:44.403 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:44.403 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:44.403 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.403 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:44.403 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.403 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:44.403 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.662 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:44.662 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:44.662 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:44.662 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:44.662 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:44.663 01:29:10 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.663 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.922 [2024-07-16 01:29:10.683124] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:44.922 [2024-07-16 01:29:10.683721] bdev_nvme.c:6973:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:44.922 [2024-07-16 01:29:10.683743] bdev_nvme.c:6954:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.922 [2024-07-16 01:29:10.812431] 
bdev_nvme.c:6915:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:44.922 01:29:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:23:45.181 [2024-07-16 01:29:10.916159] bdev_nvme.c:6810:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:45.181 [2024-07-16 01:29:10.916178] bdev_nvme.c:6769:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:45.181 [2024-07-16 01:29:10.916183] bdev_nvme.c:6769:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:46.116 01:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:46.116 01:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:46.116 01:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:46.116 01:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:46.116 01:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:46.116 01:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:46.116 01:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.116 01:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:46.116 01:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:46.116 01:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.116 01:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:46.116 01:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:46.116 01:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:46.116 01:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:46.116 01:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:46.116 01:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:46.116 01:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:46.116 01:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:46.116 01:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:46.116 01:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:46.116 01:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:46.116 01:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:46.116 01:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.116 01:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:46.116 01:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.116 01:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:46.116 01:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:46.116 01:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:46.116 01:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:46.116 01:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:46.117 01:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.117 01:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:46.117 [2024-07-16 01:29:11.951191] bdev_nvme.c:6973:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:46.117 [2024-07-16 01:29:11.951213] bdev_nvme.c:6954:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:46.117 01:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.117 01:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:46.117 01:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:46.117 01:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:46.117 01:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:46.117 01:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:46.117 01:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:46.117 [2024-07-16 01:29:11.959829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.117 [2024-07-16 01:29:11.959850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.117 [2024-07-16 01:29:11.959859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.117 [2024-07-16 01:29:11.959866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.117 [2024-07-16 01:29:11.959873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.117 [2024-07-16 01:29:11.959881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.117 [2024-07-16 01:29:11.959888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.117 [2024-07-16 01:29:11.959894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.117 [2024-07-16 01:29:11.959901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98b0b0 is same with the state(5) to be set 00:23:46.117 01:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:46.117 01:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:46.117 01:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:46.117 01:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.117 01:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:46.117 01:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:46.117 [2024-07-16 01:29:11.969842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x98b0b0 (9): Bad file descriptor 00:23:46.117 01:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.117 [2024-07-16 01:29:11.979880] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:46.117 [2024-07-16 01:29:11.980082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.117 [2024-07-16 01:29:11.980097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x98b0b0 with addr=10.0.0.2, port=4420 00:23:46.117 [2024-07-16 01:29:11.980105] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98b0b0 is same with the state(5) to be set 00:23:46.117 [2024-07-16 01:29:11.980117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x98b0b0 (9): Bad file descriptor 00:23:46.117 [2024-07-16 01:29:11.980133] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:46.117 [2024-07-16 01:29:11.980144] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:46.117 [2024-07-16 01:29:11.980151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:46.117 [2024-07-16 01:29:11.980162] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
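The connect() failures with errno = 111 in the retry loop above and below are ECONNREFUSED: the test removed the 4420 listener a moment earlier (host/discovery.sh@127), so every reconnect attempt made by the host's controller-reset path is refused until the discovery poller fetches a fresh log page and drops the 4420 path. A minimal sketch of the target-side step that triggers this state, assuming SPDK's scripts/rpc.py as the client behind the trace's rpc_cmd wrapper:

    # Remove the first listener from the subsystem; hosts still holding a
    # connection to 10.0.0.2:4420 now get ECONNREFUSED (errno 111) on every
    # reconnect attempt, as in the resetting-controller loop in this trace.
    scripts/rpc.py nvmf_subsystem_remove_listener \
        nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The repeated resets are transient noise: bdev_nvme keeps retrying the path until the discovery code declares 10.0.0.2:4420 "not found" a few entries below.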
00:23:46.117 [2024-07-16 01:29:11.989938] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:46.117 [2024-07-16 01:29:11.990123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.117 [2024-07-16 01:29:11.990136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x98b0b0 with addr=10.0.0.2, port=4420 00:23:46.117 [2024-07-16 01:29:11.990143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98b0b0 is same with the state(5) to be set 00:23:46.117 [2024-07-16 01:29:11.990153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x98b0b0 (9): Bad file descriptor 00:23:46.117 [2024-07-16 01:29:11.990163] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:46.117 [2024-07-16 01:29:11.990169] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:46.117 [2024-07-16 01:29:11.990176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:46.117 [2024-07-16 01:29:11.990185] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.117 01:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.117 01:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:46.117 01:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:46.117 01:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:46.117 01:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:46.117 01:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:46.117 01:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:46.117 01:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:46.117 [2024-07-16 01:29:11.999989] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:46.117 01:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:46.117 01:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:46.117 [2024-07-16 01:29:12.000900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.117 [2024-07-16 01:29:12.000922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x98b0b0 with addr=10.0.0.2, port=4420 00:23:46.117 [2024-07-16 01:29:12.000931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98b0b0 is same with the state(5) to be set 00:23:46.117 [2024-07-16 01:29:12.000945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x98b0b0 (9): Bad file descriptor 00:23:46.117 [2024-07-16 01:29:12.000973] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:46.117 [2024-07-16 01:29:12.000982] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] 
controller reinitialization failed 00:23:46.117 [2024-07-16 01:29:12.000989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:46.117 [2024-07-16 01:29:12.001000] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.117 01:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.117 01:29:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:46.117 01:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:46.117 01:29:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:46.117 [2024-07-16 01:29:12.010044] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:46.117 [2024-07-16 01:29:12.010175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.117 [2024-07-16 01:29:12.010189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x98b0b0 with addr=10.0.0.2, port=4420 00:23:46.117 [2024-07-16 01:29:12.010196] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98b0b0 is same with the state(5) to be set 00:23:46.117 [2024-07-16 01:29:12.010208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x98b0b0 (9): Bad file descriptor 00:23:46.117 [2024-07-16 01:29:12.010218] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:46.117 [2024-07-16 01:29:12.010224] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:46.117 [2024-07-16 01:29:12.010231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:46.117 [2024-07-16 01:29:12.010241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.117 [2024-07-16 01:29:12.020098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:46.117 [2024-07-16 01:29:12.020383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.117 [2024-07-16 01:29:12.020397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x98b0b0 with addr=10.0.0.2, port=4420 00:23:46.117 [2024-07-16 01:29:12.020404] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98b0b0 is same with the state(5) to be set 00:23:46.117 [2024-07-16 01:29:12.020415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x98b0b0 (9): Bad file descriptor 00:23:46.117 [2024-07-16 01:29:12.020424] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:46.117 [2024-07-16 01:29:12.020430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:46.117 [2024-07-16 01:29:12.020437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:46.117 [2024-07-16 01:29:12.020447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
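Once the discovery poller logs 10.0.0.2:4420 "not found" and 10.0.0.2:4421 "found again" (next entry), the reset loop ends and only the 4421 path remains. The check the test then runs at host/discovery.sh@131 boils down to the following, with the jq/sort/xargs pipeline taken verbatim from its get_subsystem_paths helper and, as above, scripts/rpc.py assumed as the RPC client:

    # List the remaining transport service IDs (ports) for controller nvme0
    # via the host app's RPC socket; once the 4420 listener removal settles,
    # this prints just "4421".
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs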
00:23:46.117 [2024-07-16 01:29:12.030148] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:46.117 [2024-07-16 01:29:12.030407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.117 [2024-07-16 01:29:12.030419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x98b0b0 with addr=10.0.0.2, port=4420 00:23:46.117 [2024-07-16 01:29:12.030426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98b0b0 is same with the state(5) to be set 00:23:46.117 [2024-07-16 01:29:12.030436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x98b0b0 (9): Bad file descriptor 00:23:46.117 [2024-07-16 01:29:12.030446] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:46.117 [2024-07-16 01:29:12.030452] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:46.117 [2024-07-16 01:29:12.030459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:46.117 [2024-07-16 01:29:12.030468] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.117 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.117 [2024-07-16 01:29:12.037687] bdev_nvme.c:6778:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:46.117 [2024-07-16 01:29:12.037707] bdev_nvme.c:6769:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:46.117 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:46.118 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:46.118 01:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:46.118 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:46.118 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:46.118 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:46.118 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:46.118 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:46.118 01:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:46.118 01:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:46.118 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.118 01:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:46.118 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:46.118 01:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:46.118 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:23:46.118 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:23:46.118 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:46.118 01:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:46.118 01:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:46.118 01:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:46.118 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:46.118 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:46.118 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:46.118 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:46.118 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:46.118 01:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:46.118 01:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:46.118 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.118 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # jq -r '.[].name' 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 
-- # xtrace_disable 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.377 01:29:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:47.762 [2024-07-16 01:29:13.360478] bdev_nvme.c:6991:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:47.762 [2024-07-16 01:29:13.360495] bdev_nvme.c:7071:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:47.762 [2024-07-16 01:29:13.360506] bdev_nvme.c:6954:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:47.762 [2024-07-16 01:29:13.446761] bdev_nvme.c:6920:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:47.762 [2024-07-16 01:29:13.668490] bdev_nvme.c:6810:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:47.762 [2024-07-16 01:29:13.668516] bdev_nvme.c:6769:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:47.762 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.762 01:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:47.762 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:47.762 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:47.762 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:47.762 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:47.762 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:47.762 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:47.762 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:47.762 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.762 01:29:13 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:23:47.762 request: 00:23:47.762 { 00:23:47.762 "name": "nvme", 00:23:47.762 "trtype": "tcp", 00:23:47.762 "traddr": "10.0.0.2", 00:23:47.762 "adrfam": "ipv4", 00:23:47.762 "trsvcid": "8009", 00:23:47.762 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:47.762 "wait_for_attach": true, 00:23:47.762 "method": "bdev_nvme_start_discovery", 00:23:47.762 "req_id": 1 00:23:47.762 } 00:23:47.762 Got JSON-RPC error response 00:23:47.762 response: 00:23:47.762 { 00:23:47.762 "code": -17, 00:23:47.762 "message": "File exists" 00:23:47.762 } 00:23:47.762 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:47.762 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:47.762 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:47.762 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:47.762 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:47.762 01:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:47.762 01:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:47.762 01:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:47.762 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.762 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:47.762 01:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:47.762 01:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:47.762 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.762 01:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:47.762 01:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:47.762 01:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:47.762 01:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:47.762 01:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:47.762 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.762 01:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:47.762 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:48.021 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.021 01:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:48.021 01:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:48.021 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:48.021 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:48.021 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:23:48.021 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:48.021 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:48.021 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:48.021 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:48.021 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.021 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:48.021 request: 00:23:48.021 { 00:23:48.021 "name": "nvme_second", 00:23:48.021 "trtype": "tcp", 00:23:48.021 "traddr": "10.0.0.2", 00:23:48.021 "adrfam": "ipv4", 00:23:48.021 "trsvcid": "8009", 00:23:48.021 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:48.021 "wait_for_attach": true, 00:23:48.021 "method": "bdev_nvme_start_discovery", 00:23:48.021 "req_id": 1 00:23:48.021 } 00:23:48.021 Got JSON-RPC error response 00:23:48.021 response: 00:23:48.021 { 00:23:48.021 "code": -17, 00:23:48.021 "message": "File exists" 00:23:48.021 } 00:23:48.021 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:48.021 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:48.021 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:48.021 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:48.021 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:48.021 01:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:48.021 01:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:48.021 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.021 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:48.021 01:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:48.021 01:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:48.021 01:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:48.021 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.021 01:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:48.021 01:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:48.021 01:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:48.021 01:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:48.021 01:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:48.021 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.021 01:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:48.021 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:48.021 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.021 01:29:13 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:48.021 01:29:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:48.021 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:48.021 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:48.021 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:48.021 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:48.021 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:48.021 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:48.021 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:48.021 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.021 01:29:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:48.958 [2024-07-16 01:29:14.907934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.958 [2024-07-16 01:29:14.907963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x98a980 with addr=10.0.0.2, port=8010 00:23:48.958 [2024-07-16 01:29:14.907976] nvme_tcp.c:2712:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:48.958 [2024-07-16 01:29:14.907983] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:48.958 [2024-07-16 01:29:14.907989] bdev_nvme.c:7053:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:50.334 [2024-07-16 01:29:15.910310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.334 [2024-07-16 01:29:15.910340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x98a980 with addr=10.0.0.2, port=8010 00:23:50.334 [2024-07-16 01:29:15.910353] nvme_tcp.c:2712:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:50.334 [2024-07-16 01:29:15.910359] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:50.334 [2024-07-16 01:29:15.910365] bdev_nvme.c:7053:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:51.269 [2024-07-16 01:29:16.912560] bdev_nvme.c:7034:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:51.269 request: 00:23:51.269 { 00:23:51.269 "name": "nvme_second", 00:23:51.269 "trtype": "tcp", 00:23:51.269 "traddr": "10.0.0.2", 00:23:51.269 "adrfam": "ipv4", 00:23:51.269 "trsvcid": "8010", 00:23:51.269 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:51.269 "wait_for_attach": false, 00:23:51.269 "attach_timeout_ms": 3000, 00:23:51.269 "method": "bdev_nvme_start_discovery", 00:23:51.269 "req_id": 1 00:23:51.269 } 00:23:51.269 Got JSON-RPC error response 00:23:51.269 response: 00:23:51.269 { 00:23:51.269 "code": -110, 
00:23:51.269 "message": "Connection timed out" 00:23:51.269 } 00:23:51.269 01:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:51.269 01:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:51.269 01:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:51.269 01:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:51.269 01:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:51.269 01:29:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:51.269 01:29:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:51.269 01:29:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:51.269 01:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.269 01:29:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:51.269 01:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.269 01:29:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:51.269 01:29:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.269 01:29:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:51.269 01:29:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:51.269 01:29:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3489191 00:23:51.269 01:29:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:51.269 01:29:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:51.269 01:29:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:23:51.269 01:29:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:51.269 01:29:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:23:51.269 01:29:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:51.269 01:29:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:51.269 rmmod nvme_tcp 00:23:51.269 rmmod nvme_fabrics 00:23:51.269 rmmod nvme_keyring 00:23:51.269 01:29:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:51.269 01:29:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:23:51.269 01:29:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:23:51.270 01:29:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 3488949 ']' 00:23:51.270 01:29:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 3488949 00:23:51.270 01:29:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 3488949 ']' 00:23:51.270 01:29:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 3488949 00:23:51.270 01:29:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:23:51.270 01:29:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:51.270 01:29:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3488949 00:23:51.270 01:29:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 
00:23:51.270 01:29:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:51.270 01:29:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3488949' 00:23:51.270 killing process with pid 3488949 00:23:51.270 01:29:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 3488949 00:23:51.270 01:29:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 3488949 00:23:51.528 01:29:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:51.528 01:29:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:51.528 01:29:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:51.528 01:29:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:51.528 01:29:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:51.528 01:29:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.528 01:29:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:51.528 01:29:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:53.431 01:29:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:53.431 00:23:53.431 real 0m17.706s 00:23:53.431 user 0m22.520s 00:23:53.431 sys 0m5.219s 00:23:53.431 01:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:53.431 01:29:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.431 ************************************ 00:23:53.431 END TEST nvmf_host_discovery 00:23:53.431 ************************************ 00:23:53.431 01:29:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:53.431 01:29:19 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:53.431 01:29:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:53.431 01:29:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:53.431 01:29:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:53.431 ************************************ 00:23:53.431 START TEST nvmf_host_multipath_status 00:23:53.431 ************************************ 00:23:53.431 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:53.688 * Looking for test storage... 
00:23:53.688 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:53.688 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:53.688 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:53.688 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:53.688 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:53.688 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:53.688 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:53.689 01:29:19 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:23:53.689 01:29:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:58.970 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:58.970 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:23:58.970 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:58.970 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:58.970 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:58.970 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:58.970 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:58.970 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:23:58.970 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:58.970 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:23:58.970 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:23:58.970 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:23:58.970 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:23:58.970 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:23:58.970 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:23:58.970 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:58.970 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:58.970 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:58.970 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:58.970 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:58.971 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:58.971 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
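For reference, the NIC selection traced above reduces to bucketing PCI functions by vendor:device ID and keeping the e810 parts. A hedged sketch with a hand-filled pci_bus_cache (the real cache is built elsewhere in nvmf/common.sh; 0x8086:0x159b is the Intel E810 ID this run matched):

# Sketch only; cache contents are hand-filled for illustration.
declare -A pci_bus_cache
pci_bus_cache["0x8086:0x159b"]="0000:86:00.0 0000:86:00.1"
e810=()
e810+=(${pci_bus_cache["0x8086:0x159b"]})  # unquoted on purpose: splits into the two PCI addresses
pci_devs=("${e810[@]}")
for pci in "${pci_devs[@]}"; do
  echo "Found $pci (0x8086 - 0x159b)"
done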
00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:58.971 Found net devices under 0000:86:00.0: cvl_0_0 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:58.971 Found net devices under 0000:86:00.1: cvl_0_1 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:58.971 01:29:24 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:58.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:58.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:23:58.971 00:23:58.971 --- 10.0.0.2 ping statistics --- 00:23:58.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:58.971 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:58.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:58.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:23:58.971 00:23:58.971 --- 10.0.0.1 ping statistics --- 00:23:58.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:58.971 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:23:58.971 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:58.972 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:23:58.972 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:58.972 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:58.972 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:58.972 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:58.972 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:58.972 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:58.972 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:58.972 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:58.972 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:58.972 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:58.972 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:59.230 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=3494263 00:23:59.230 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 3494263 00:23:59.230 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:59.230 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 3494263 ']' 00:23:59.230 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:59.230 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:59.230 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:59.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:59.230 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:59.230 01:29:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:59.230 [2024-07-16 01:29:25.009553] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
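The ping checks just above close out nvmf_tcp_init: with both E810 ports on one host, the suite builds a point-to-point topology by moving the target-side port into its own network namespace, so initiator and target traffic crosses a real link. Condensed from the trace (interface and namespace names as logged):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target side
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # host -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # namespace -> host

Every later target command then runs under "ip netns exec cvl_0_0_ns_spdk", which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD above.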
00:23:59.230 [2024-07-16 01:29:25.009599] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:59.230 EAL: No free 2048 kB hugepages reported on node 1 00:23:59.230 [2024-07-16 01:29:25.070271] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:59.230 [2024-07-16 01:29:25.149109] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:59.230 [2024-07-16 01:29:25.149144] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:59.230 [2024-07-16 01:29:25.149151] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:59.230 [2024-07-16 01:29:25.149157] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:59.230 [2024-07-16 01:29:25.149162] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:59.230 [2024-07-16 01:29:25.149205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:59.230 [2024-07-16 01:29:25.149208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.171 01:29:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:00.171 01:29:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:24:00.171 01:29:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:00.171 01:29:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:00.171 01:29:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:00.171 01:29:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:00.171 01:29:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3494263 00:24:00.171 01:29:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:00.171 [2024-07-16 01:29:25.992788] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:00.171 01:29:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:00.429 Malloc0 00:24:00.429 01:29:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:00.429 01:29:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:00.687 01:29:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:00.946 [2024-07-16 01:29:26.682866] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:00.946 01:29:26 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:00.946 [2024-07-16 01:29:26.839258] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:00.946 01:29:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3494539 00:24:00.946 01:29:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:00.946 01:29:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:00.946 01:29:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3494539 /var/tmp/bdevperf.sock 00:24:00.946 01:29:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 3494539 ']' 00:24:00.946 01:29:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:00.946 01:29:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:00.946 01:29:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:00.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:00.946 01:29:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:00.946 01:29:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:01.881 01:29:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:01.881 01:29:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:24:01.881 01:29:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:01.881 01:29:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:24:02.448 Nvme0n1 00:24:02.448 01:29:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:03.015 Nvme0n1 00:24:03.015 01:29:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:03.015 01:29:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:04.919 01:29:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:04.919 01:29:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:05.177 01:29:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:05.177 01:29:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:06.209 01:29:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:06.209 01:29:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:06.209 01:29:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.209 01:29:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:06.467 01:29:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.467 01:29:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:06.467 01:29:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.467 01:29:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:06.725 01:29:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:06.725 01:29:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:06.725 01:29:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.725 01:29:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:06.984 01:29:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.984 01:29:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:06.984 01:29:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.984 01:29:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:06.984 01:29:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.984 01:29:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:06.984 01:29:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.984 01:29:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:07.242 01:29:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:07.242 01:29:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:07.242 01:29:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:07.242 01:29:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:07.500 01:29:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:07.500 01:29:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:07.500 01:29:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:07.759 01:29:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:07.759 01:29:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:08.693 01:29:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:08.693 01:29:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:08.693 01:29:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.693 01:29:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:08.951 01:29:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:08.951 01:29:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:08.951 01:29:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.951 01:29:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:09.209 01:29:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.209 01:29:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:09.209 01:29:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.209 01:29:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:09.467 01:29:35 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.467 01:29:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:09.467 01:29:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:09.467 01:29:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.467 01:29:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.467 01:29:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:09.467 01:29:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.467 01:29:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:09.724 01:29:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.724 01:29:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:09.724 01:29:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.724 01:29:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:09.982 01:29:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.982 01:29:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:09.982 01:29:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:09.982 01:29:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:10.239 01:29:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:11.174 01:29:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:11.174 01:29:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:11.174 01:29:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.174 01:29:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:11.431 01:29:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.431 01:29:37 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:11.431 01:29:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.431 01:29:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:11.689 01:29:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:11.689 01:29:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:11.689 01:29:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.689 01:29:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:11.947 01:29:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.947 01:29:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:11.948 01:29:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.948 01:29:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:11.948 01:29:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.948 01:29:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:11.948 01:29:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.948 01:29:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:12.205 01:29:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:12.205 01:29:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:12.205 01:29:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:12.205 01:29:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:12.464 01:29:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:12.464 01:29:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:12.464 01:29:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:12.464 01:29:38 nvmf_tcp.nvmf_host_multipath_status -- 
00:24:12.722 01:29:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1
00:24:13.658 01:29:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false
00:24:13.658 01:29:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:24:13.658 01:29:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:13.658 01:29:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:24:13.917 01:29:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:13.917 01:29:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:24:13.917 01:29:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:13.917 01:29:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:24:14.175 01:29:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:24:14.175 01:29:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:24:14.175 01:29:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:14.175 01:29:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:24:14.175 01:29:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:14.175 01:29:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:24:14.175 01:29:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:14.175 01:29:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:24:14.433 01:29:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:14.433 01:29:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:24:14.433 01:29:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:14.433 01:29:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:24:14.692 01:29:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:14.692 01:29:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:24:14.692 01:29:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:14.692 01:29:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:24:14.950 01:29:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
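The scenarios themselves are driven by two more helpers visible in the trace: set_ANA_state pushes one ANA state per listener to the target (script lines 59-60), and check_status asserts the six path attributes in the fixed order the per-port checks above keep repeating (lines 68-73). Sketched from the trace, with illustrative argument handling, and again assuming the $rootdir and port_status names from the previous sketch:

    # Sketch from the trace; the real definitions live in host/multipath_status.sh.
    set_ANA_state() {
        # $1 -> ANA state for the 4420 listener, $2 -> for the 4421 listener.
        "$rootdir/scripts/rpc.py" nvmf_subsystem_listener_set_ana_state \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        "$rootdir/scripts/rpc.py" nvmf_subsystem_listener_set_ana_state \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }
    check_status() {
        # Expected current/connected/accessible values for ports 4420 and 4421.
        port_status 4420 current "$1" && port_status 4421 current "$2" &&
        port_status 4420 connected "$3" && port_status 4421 connected "$4" &&
        port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
    }

The sleep 1 the trace shows between the two steps gives the host side time to process the ANA change notification before the paths are re-inspected.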
00:24:14.950 01:29:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible
00:24:14.950 01:29:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:24:14.950 01:29:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:24:15.208 01:29:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1
00:24:16.144 01:29:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false
00:24:16.144 01:29:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:24:16.144 01:29:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:16.144 01:29:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:24:16.402 01:29:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:24:16.402 01:29:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:24:16.402 01:29:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:16.402 01:29:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:24:16.661 01:29:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:24:16.661 01:29:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:24:16.661 01:29:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:16.661 01:29:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:24:16.661 01:29:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:16.661 01:29:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- #
port_status 4421 connected true 00:24:16.661 01:29:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.661 01:29:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:16.920 01:29:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.920 01:29:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:16.920 01:29:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:16.920 01:29:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.178 01:29:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:17.178 01:29:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:17.178 01:29:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.178 01:29:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:17.178 01:29:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:17.178 01:29:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:17.178 01:29:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:17.435 01:29:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:17.693 01:29:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:18.627 01:29:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:18.627 01:29:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:18.627 01:29:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.627 01:29:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:18.886 01:29:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:18.886 01:29:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:18.886 01:29:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.886 01:29:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:18.886 01:29:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:18.886 01:29:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:18.886 01:29:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:18.886 01:29:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.144 01:29:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.144 01:29:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:19.144 01:29:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.144 01:29:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:19.402 01:29:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.402 01:29:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:19.402 01:29:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.402 01:29:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:19.660 01:29:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:19.660 01:29:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:19.660 01:29:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.660 01:29:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:19.660 01:29:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.660 01:29:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:19.918 01:29:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:24:19.918 01:29:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:24:20.177 01:29:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:20.435 01:29:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:21.373 01:29:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:21.373 01:29:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:21.373 01:29:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.373 01:29:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:21.632 01:29:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.632 01:29:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:21.632 01:29:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.632 01:29:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:21.632 01:29:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.632 01:29:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:21.632 01:29:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.632 01:29:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:21.891 01:29:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.891 01:29:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:21.891 01:29:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.891 01:29:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:22.150 01:29:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.150 01:29:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:22.150 01:29:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.150 01:29:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:22.409 01:29:48 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.409 01:29:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:22.409 01:29:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.409 01:29:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:22.409 01:29:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.409 01:29:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:22.409 01:29:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:22.669 01:29:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:22.928 01:29:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:23.866 01:29:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:23.866 01:29:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:23.866 01:29:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.866 01:29:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:24.125 01:29:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:24.125 01:29:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:24.125 01:29:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:24.125 01:29:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.125 01:29:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:24.125 01:29:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:24.125 01:29:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.125 01:29:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:24.384 01:29:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:24.384 01:29:50 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:24.384 01:29:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.384 01:29:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:24.642 01:29:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:24.642 01:29:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:24.642 01:29:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.642 01:29:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:24.643 01:29:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:24.643 01:29:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:24.643 01:29:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.643 01:29:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:24.901 01:29:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:24.901 01:29:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:24.901 01:29:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:25.160 01:29:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:25.417 01:29:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:24:26.352 01:29:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:26.352 01:29:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:26.352 01:29:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.352 01:29:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:26.610 01:29:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:26.610 01:29:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:26.610 01:29:52 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.610 01:29:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:26.610 01:29:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:26.610 01:29:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:26.610 01:29:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.610 01:29:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:26.867 01:29:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:26.867 01:29:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:26.867 01:29:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.867 01:29:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:27.123 01:29:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:27.123 01:29:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:27.123 01:29:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.123 01:29:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:27.379 01:29:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:27.379 01:29:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:27.379 01:29:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.380 01:29:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:27.380 01:29:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:27.380 01:29:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:27.380 01:29:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:27.647 01:29:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:27.907 01:29:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:28.843 01:29:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:28.843 01:29:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:28.843 01:29:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:28.843 01:29:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:29.102 01:29:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:29.102 01:29:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:29.102 01:29:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.102 01:29:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:29.361 01:29:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:29.361 01:29:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:29.361 01:29:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.361 01:29:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:29.361 01:29:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:29.361 01:29:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:29.361 01:29:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.361 01:29:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:29.619 01:29:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:29.619 01:29:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:29.619 01:29:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.619 01:29:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:29.878 01:29:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:29.878 01:29:55 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:24:29.878 01:29:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:29.878 01:29:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:24:30.150 01:29:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:24:30.150 01:29:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3494539
00:24:30.150 01:29:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 3494539 ']'
00:24:30.150 01:29:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 3494539
00:24:30.150 01:29:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname
00:24:30.150 01:29:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:30.150 01:29:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3494539
00:24:30.150 01:29:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:24:30.150 01:29:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:24:30.150 01:29:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3494539'
00:24:30.151 killing process with pid 3494539
00:24:30.151 01:29:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 3494539
00:24:30.151 01:29:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 3494539
00:24:30.151 Connection closed with partial response:
00:24:30.151
00:24:30.151
00:24:30.151 01:29:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3494539
00:24:30.151 01:29:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:30.151 [2024-07-16 01:29:26.902120] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization...
00:24:30.151 [2024-07-16 01:29:26.902172] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3494539 ]
00:24:30.151 EAL: No free 2048 kB hugepages reported on node 1
00:24:30.151 [2024-07-16 01:29:26.953811] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:30.151 [2024-07-16 01:29:27.025529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:24:30.151 Running I/O for 90 seconds...
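What follows is the catted bdevperf log (try.txt). While a listener sits in the inaccessible ANA state, commands issued on that path complete with the path-related NVMe status ASYMMETRIC ACCESS INACCESSIBLE, printed as (03/02) for status code type 0x3 / status code 0x02; the NOTICE pairs below record each submitted WRITE (nvme_io_qpair_print_command) together with that completion (spdk_nvme_print_completion). One way to tally those completions from the artifact, offered only as a sketch and not part of the test itself:

    # Hypothetical post-processing of the dumped artifact.
    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt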
00:24:30.151 [2024-07-16 01:29:40.850773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:45112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.151 [2024-07-16 01:29:40.850815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:30.151 [2024-07-16 01:29:40.850851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:45120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.151 [2024-07-16 01:29:40.850860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:30.151 [2024-07-16 01:29:40.850873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:45128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.151 [2024-07-16 01:29:40.850880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:30.151 [2024-07-16 01:29:40.850893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:45136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.151 [2024-07-16 01:29:40.850900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:30.151 [2024-07-16 01:29:40.850912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:45144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.151 [2024-07-16 01:29:40.850919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:30.151 [2024-07-16 01:29:40.850932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.151 [2024-07-16 01:29:40.850939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:30.151 [2024-07-16 01:29:40.850951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:45160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.151 [2024-07-16 01:29:40.850957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:30.151 [2024-07-16 01:29:40.850969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:45168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.151 [2024-07-16 01:29:40.850977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:30.151 [2024-07-16 01:29:40.850989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:45176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.151 [2024-07-16 01:29:40.850996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:30.151 [2024-07-16 01:29:40.851008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:45184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.151 [2024-07-16 01:29:40.851015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:9 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:30.151 [2024-07-16 01:29:40.851027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:45192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.151 [2024-07-16 01:29:40.851041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:30.151 [2024-07-16 01:29:40.851054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:45200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.151 [2024-07-16 01:29:40.851061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:30.151 [2024-07-16 01:29:40.851074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:45208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.151 [2024-07-16 01:29:40.851081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:30.151 [2024-07-16 01:29:40.851092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:45216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.151 [2024-07-16 01:29:40.851101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:30.151 [2024-07-16 01:29:40.851114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:45224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.151 [2024-07-16 01:29:40.851122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:30.151 [2024-07-16 01:29:40.851136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:45232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.151 [2024-07-16 01:29:40.851143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:30.151 [2024-07-16 01:29:40.851156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:45240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.151 [2024-07-16 01:29:40.851164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:30.151 [2024-07-16 01:29:40.851792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:45248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.151 [2024-07-16 01:29:40.851806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.151 [2024-07-16 01:29:40.851821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.151 [2024-07-16 01:29:40.851828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:30.151 [2024-07-16 01:29:40.851842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:45264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.151 [2024-07-16 01:29:40.851849] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:30.151 [2024-07-16 01:29:40.851864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.151 [2024-07-16 01:29:40.851871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:30.151 [2024-07-16 01:29:40.851885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:45280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.151 [2024-07-16 01:29:40.851892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:30.151 [2024-07-16 01:29:40.851906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:45288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.151 [2024-07-16 01:29:40.851913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:30.151 [2024-07-16 01:29:40.851931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:45296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.151 [2024-07-16 01:29:40.851938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:30.151 [2024-07-16 01:29:40.851952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.151 [2024-07-16 01:29:40.851959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:30.151 [2024-07-16 01:29:40.851973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:45312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.151 [2024-07-16 01:29:40.851979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:30.151 [2024-07-16 01:29:40.851993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:45320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.151 [2024-07-16 01:29:40.852000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:30.151 [2024-07-16 01:29:40.852013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:45328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.151 [2024-07-16 01:29:40.852021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:30.151 [2024-07-16 01:29:40.852034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:45336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.151 [2024-07-16 01:29:40.852040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:30.151 [2024-07-16 01:29:40.852054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:45344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:30.151 [2024-07-16 01:29:40.852061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:30.151 [2024-07-16 01:29:40.852075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.151 [2024-07-16 01:29:40.852082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:30.151 [2024-07-16 01:29:40.852096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:45360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.151 [2024-07-16 01:29:40.852103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:30.151 [2024-07-16 01:29:40.852117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:45368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.151 [2024-07-16 01:29:40.852124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:30.151 [2024-07-16 01:29:40.852139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.151 [2024-07-16 01:29:40.852147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:30.151 [2024-07-16 01:29:40.852195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:45384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.151 [2024-07-16 01:29:40.852204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:30.151 [2024-07-16 01:29:40.852221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:45392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.151 [2024-07-16 01:29:40.852229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:30.151 [2024-07-16 01:29:40.852243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:45400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.151 [2024-07-16 01:29:40.852251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:30.152 [2024-07-16 01:29:40.852265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:45408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.152 [2024-07-16 01:29:40.852273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:30.152 [2024-07-16 01:29:40.852288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:45416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.152 [2024-07-16 01:29:40.852295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:30.152 [2024-07-16 01:29:40.852310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 
lba:45424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.152 [2024-07-16 01:29:40.852318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:30.152 [2024-07-16 01:29:40.852332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:45432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.152 [2024-07-16 01:29:40.852344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:30.152 [2024-07-16 01:29:40.852359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:45440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.152 [2024-07-16 01:29:40.852367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:30.152 [2024-07-16 01:29:40.852381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:45448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.152 [2024-07-16 01:29:40.852389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:30.152 [2024-07-16 01:29:40.852404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.152 [2024-07-16 01:29:40.852411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:30.152 [2024-07-16 01:29:40.852426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:45464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.152 [2024-07-16 01:29:40.852434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:30.152 [2024-07-16 01:29:40.852448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:45472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.152 [2024-07-16 01:29:40.852456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:30.152 [2024-07-16 01:29:40.852470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:45480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.152 [2024-07-16 01:29:40.852478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:30.152 [2024-07-16 01:29:40.852492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:45488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.152 [2024-07-16 01:29:40.852501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:30.152 [2024-07-16 01:29:40.852515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:45496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.152 [2024-07-16 01:29:40.852522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:30.152 [2024-07-16 01:29:40.852537] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:45504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.152 [2024-07-16 01:29:40.852543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... several hundred near-identical nvme_io_qpair_print_command / spdk_nvme_print_completion pairs elided: a burst of WRITE I/Os on sqid:1 (lba 45504-46128, len:8) at 01:29:40.85x and a second burst of WRITE and READ I/Os (lba 108352-109392) at 01:29:53.70x, every command completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1 ...]
00:24:30.157 [2024-07-16 01:29:53.706478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:109096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.157 [2024-07-16 01:29:53.706485] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:30.157 [2024-07-16 01:29:53.706498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:109128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.157 [2024-07-16 01:29:53.706505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:30.157 [2024-07-16 01:29:53.706517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:109160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.157 [2024-07-16 01:29:53.706524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:30.157 [2024-07-16 01:29:53.706538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:109192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.157 [2024-07-16 01:29:53.706545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:30.157 [2024-07-16 01:29:53.706557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:109224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.157 [2024-07-16 01:29:53.706564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:30.157 [2024-07-16 01:29:53.706576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:109256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.157 [2024-07-16 01:29:53.706582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:30.157 [2024-07-16 01:29:53.706595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:108376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.157 [2024-07-16 01:29:53.706601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:30.157 [2024-07-16 01:29:53.706614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:109280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.157 [2024-07-16 01:29:53.706621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:30.157 [2024-07-16 01:29:53.706633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:108416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.157 [2024-07-16 01:29:53.706639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:30.157 [2024-07-16 01:29:53.706652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:108480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.157 [2024-07-16 01:29:53.706659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:30.157 [2024-07-16 01:29:53.706673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:109336 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:30.157 [2024-07-16 01:29:53.706680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:30.157 [2024-07-16 01:29:53.706692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:109328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.157 [2024-07-16 01:29:53.706700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:30.157 [2024-07-16 01:29:53.706712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:109360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.157 [2024-07-16 01:29:53.706719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:30.157 [2024-07-16 01:29:53.706731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:108560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.157 [2024-07-16 01:29:53.706737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:30.157 [2024-07-16 01:29:53.706750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:108624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.157 [2024-07-16 01:29:53.706757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:30.157 [2024-07-16 01:29:53.706770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:108688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.157 [2024-07-16 01:29:53.706776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:30.157 [2024-07-16 01:29:53.706788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:108752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.157 [2024-07-16 01:29:53.706795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:30.157 [2024-07-16 01:29:53.706807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:108816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.157 [2024-07-16 01:29:53.706814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:30.157 [2024-07-16 01:29:53.706826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:109368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.157 [2024-07-16 01:29:53.706833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:30.157 [2024-07-16 01:29:53.706845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:108928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.157 [2024-07-16 01:29:53.706852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:30.157 [2024-07-16 01:29:53.706864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:119 nsid:1 lba:108992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.157 [2024-07-16 01:29:53.706871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:30.157 [2024-07-16 01:29:53.706883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:109056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.157 [2024-07-16 01:29:53.706889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:30.157 [2024-07-16 01:29:53.706902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:109120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.157 [2024-07-16 01:29:53.706910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:30.157 [2024-07-16 01:29:53.706922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:108456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.157 [2024-07-16 01:29:53.706935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:30.157 [2024-07-16 01:29:53.706947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:108360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.157 [2024-07-16 01:29:53.706955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:30.157 [2024-07-16 01:29:53.708252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:108552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.157 [2024-07-16 01:29:53.708268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:30.157 [2024-07-16 01:29:53.708283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:108616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.157 [2024-07-16 01:29:53.708290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:30.157 [2024-07-16 01:29:53.708303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:108680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.157 [2024-07-16 01:29:53.708310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.157 [2024-07-16 01:29:53.708323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:108744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.157 [2024-07-16 01:29:53.708329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:30.157 [2024-07-16 01:29:53.708346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:108808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.157 [2024-07-16 01:29:53.708354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:30.157 [2024-07-16 
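For context, the "(03/02)" in each completion above encodes SCT/SC: status code type 0x3 (path-related status) with status code 0x02, the NVMe "Asymmetric Access Inaccessible" status a controller returns while its ANA state for the namespace is inaccessible on that path. A minimal, self-contained C sketch of that decode follows; the bit layout matches the NVMe 1.4 completion status word, but the struct and field names are this sketch's own illustrative assumptions, not SPDK definitions:

#include <stdio.h>
#include <stdint.h>

/*
 * Illustrative decode of the 16-bit NVMe completion status word
 * (NVMe 1.4, CQ entry DW3 bits 31:16). The layout below assumes the
 * little-endian bitfield ordering used by gcc/clang on x86; the struct
 * and field names are this sketch's own, not SPDK's.
 */
struct nvme_status {
    uint16_t p   : 1; /* phase tag */
    uint16_t sc  : 8; /* status code */
    uint16_t sct : 3; /* status code type */
    uint16_t crd : 2; /* command retry delay (NVMe 1.4) */
    uint16_t m   : 1; /* more */
    uint16_t dnr : 1; /* do not retry */
};

int main(void)
{
    /* SCT 0x3 / SC 0x02 is "Asymmetric Access Inaccessible" -- the
     * (03/02) printed in the completions above, with p:0 m:0 dnr:0. */
    struct nvme_status st = { .p = 0, .sc = 0x02, .sct = 0x3,
                              .crd = 0, .m = 0, .dnr = 0 };

    if (st.sct == 0x3 && st.sc == 0x02) {
        printf("ASYMMETRIC ACCESS INACCESSIBLE (%02x/%02x) p:%u m:%u dnr:%u\n",
               (unsigned)st.sct, (unsigned)st.sc,
               (unsigned)st.p, (unsigned)st.m, (unsigned)st.dnr);
    }
    return 0;
}

Because dnr:0 (do not retry) is clear on every completion, the initiator is free to requeue these commands, which is consistent with the same LBAs (e.g. 108744, 109000, 109256) reappearing in later command prints below.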
00:24:30.157 [2024-07-16 01:29:53.708367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:108872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:30.157 [2024-07-16 01:29:53.708374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
[... the run continues in the same form: READ/WRITE command prints on sqid:1 interleaved with ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions; sqhd advances through 007f, wraps to 0000, and reaches 0075, and many LBAs (e.g. 108744, 109000, 109256) are printed repeatedly as the commands are retried ...]
00:24:30.161 [2024-07-16 01:29:53.726786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:109736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:30.161 [2024-07-16 01:29:53.726792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:24:30.161 [2024-07-16 01:29:53.726805] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.161 [2024-07-16 01:29:53.726812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:30.161 [2024-07-16 01:29:53.726825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:109800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.161 [2024-07-16 01:29:53.726831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:30.161 [2024-07-16 01:29:53.726843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:109832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.161 [2024-07-16 01:29:53.726851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:30.161 [2024-07-16 01:29:53.726863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:109864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.161 [2024-07-16 01:29:53.726870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:30.161 [2024-07-16 01:29:53.726885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:109896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.161 [2024-07-16 01:29:53.726892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:30.161 [2024-07-16 01:29:53.726904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:109472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.161 [2024-07-16 01:29:53.726911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:30.161 [2024-07-16 01:29:53.726924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:109560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.161 [2024-07-16 01:29:53.726930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:30.161 [2024-07-16 01:29:53.726942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:109624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.161 [2024-07-16 01:29:53.726949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:30.161 [2024-07-16 01:29:53.726962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:109208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.161 [2024-07-16 01:29:53.726968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:30.161 [2024-07-16 01:29:53.726981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:109248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.161 [2024-07-16 01:29:53.726989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:30.161 [2024-07-16 01:29:53.727001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:108632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.161 [2024-07-16 01:29:53.727009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.161 [2024-07-16 01:29:53.727021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:109256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.161 [2024-07-16 01:29:53.727028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:30.161 [2024-07-16 01:29:53.727041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:109056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.161 [2024-07-16 01:29:53.727048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:30.161 [2024-07-16 01:29:53.727061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:108520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.161 [2024-07-16 01:29:53.727067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:30.161 [2024-07-16 01:29:53.728882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:109664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.161 [2024-07-16 01:29:53.728899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:30.161 [2024-07-16 01:29:53.728914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:109696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.161 [2024-07-16 01:29:53.728921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:30.162 [2024-07-16 01:29:53.728937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:109728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.162 [2024-07-16 01:29:53.728943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:30.162 [2024-07-16 01:29:53.728956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:109760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.162 [2024-07-16 01:29:53.728963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:30.162 [2024-07-16 01:29:53.728975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:109792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.162 [2024-07-16 01:29:53.728982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:30.162 [2024-07-16 01:29:53.728995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:109824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.162 [2024-07-16 01:29:53.729001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:30.162 [2024-07-16 01:29:53.729013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:109856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.162 [2024-07-16 01:29:53.729020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:30.162 [2024-07-16 01:29:53.729032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:109888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.162 [2024-07-16 01:29:53.729039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:30.162 [2024-07-16 01:29:53.729051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:109576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.162 [2024-07-16 01:29:53.729058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:30.162 [2024-07-16 01:29:53.729070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:108616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.162 [2024-07-16 01:29:53.729077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:30.162 [2024-07-16 01:29:53.729090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:108904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.162 [2024-07-16 01:29:53.729098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:30.162 [2024-07-16 01:29:53.729111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:109128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.162 [2024-07-16 01:29:53.729118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:30.162 [2024-07-16 01:29:53.729131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:108936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.162 [2024-07-16 01:29:53.729137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:30.162 [2024-07-16 01:29:53.729150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:109368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.162 [2024-07-16 01:29:53.729157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:30.162 [2024-07-16 01:29:53.729172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:109496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.162 [2024-07-16 01:29:53.729183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:30.162 [2024-07-16 01:29:53.729195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:109568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.162 [2024-07-16 01:29:53.729203] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:30.162 [2024-07-16 01:29:53.729215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:109632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.162 [2024-07-16 01:29:53.729222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:30.162 [2024-07-16 01:29:53.729235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:108680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.162 [2024-07-16 01:29:53.729242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:30.162 [2024-07-16 01:29:53.729254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:109064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.162 [2024-07-16 01:29:53.729262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:30.162 [2024-07-16 01:29:53.729275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:109920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.162 [2024-07-16 01:29:53.729281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:30.162 [2024-07-16 01:29:53.729294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:108456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.162 [2024-07-16 01:29:53.729301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:30.162 [2024-07-16 01:29:53.729315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:109952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.162 [2024-07-16 01:29:53.729323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:30.162 [2024-07-16 01:29:53.729335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:109984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.162 [2024-07-16 01:29:53.729350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:30.162 [2024-07-16 01:29:53.729363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:110016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.162 [2024-07-16 01:29:53.729370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:30.162 [2024-07-16 01:29:53.729383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:110048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.162 [2024-07-16 01:29:53.729390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:30.162 [2024-07-16 01:29:53.729403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:110080 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.162 [2024-07-16 01:29:53.729410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:30.162 [2024-07-16 01:29:53.729422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:109640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.162 [2024-07-16 01:29:53.729432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:30.162 [2024-07-16 01:29:53.729444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:109528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.162 [2024-07-16 01:29:53.729453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:30.162 [2024-07-16 01:29:53.729466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:109704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.162 [2024-07-16 01:29:53.729474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.162 [2024-07-16 01:29:53.729488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:109768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.162 [2024-07-16 01:29:53.729495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:30.162 [2024-07-16 01:29:53.729509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:109832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.162 [2024-07-16 01:29:53.729516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:30.162 [2024-07-16 01:29:53.729528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:109896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.162 [2024-07-16 01:29:53.729537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:30.162 [2024-07-16 01:29:53.729549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:109560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.162 [2024-07-16 01:29:53.729556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:30.162 [2024-07-16 01:29:53.729569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:109208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.162 [2024-07-16 01:29:53.729576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:30.162 [2024-07-16 01:29:53.729588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:108632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.162 [2024-07-16 01:29:53.729595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:30.162 [2024-07-16 01:29:53.729608] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:109056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.162 [2024-07-16 01:29:53.729614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:30.162 [2024-07-16 01:29:53.729627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:109912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.162 [2024-07-16 01:29:53.729635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:30.162 [2024-07-16 01:29:53.729647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:109944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.162 [2024-07-16 01:29:53.729654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:30.162 [2024-07-16 01:29:53.730216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:110112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.162 [2024-07-16 01:29:53.730234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:30.162 [2024-07-16 01:29:53.730249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:110128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.162 [2024-07-16 01:29:53.730256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:30.162 [2024-07-16 01:29:53.730270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:110144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.162 [2024-07-16 01:29:53.730277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:30.162 [2024-07-16 01:29:53.730290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.162 [2024-07-16 01:29:53.730297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:30.162 [2024-07-16 01:29:53.730309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.162 [2024-07-16 01:29:53.730317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:30.162 [2024-07-16 01:29:53.730329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:110024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.163 [2024-07-16 01:29:53.730342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:30.163 [2024-07-16 01:29:53.730354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:110056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.163 [2024-07-16 01:29:53.730362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0031 p:0 m:0 
dnr:0 00:24:30.163 [2024-07-16 01:29:53.730375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:110088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.163 [2024-07-16 01:29:53.730382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:30.163 [2024-07-16 01:29:53.730394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:110168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.163 [2024-07-16 01:29:53.730401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:30.163 [2024-07-16 01:29:53.730414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:110184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.163 [2024-07-16 01:29:53.730421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:30.163 [2024-07-16 01:29:53.730433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:110200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.163 [2024-07-16 01:29:53.730440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:30.163 [2024-07-16 01:29:53.730452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:110216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.163 [2024-07-16 01:29:53.730459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:30.163 [2024-07-16 01:29:53.730471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:110232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.163 [2024-07-16 01:29:53.730478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:30.163 [2024-07-16 01:29:53.730492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:110248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.163 [2024-07-16 01:29:53.730499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:30.163 [2024-07-16 01:29:53.730512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:110264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.163 [2024-07-16 01:29:53.730518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:30.163 [2024-07-16 01:29:53.730531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.163 [2024-07-16 01:29:53.730540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:30.163 [2024-07-16 01:29:53.730553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:110296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.163 [2024-07-16 01:29:53.730560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:30.163 [2024-07-16 01:29:53.730573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:109656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.163 [2024-07-16 01:29:53.730580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:30.163 [2024-07-16 01:29:53.730592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:109720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.163 [2024-07-16 01:29:53.730599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:30.163 [2024-07-16 01:29:53.730611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:109784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.163 [2024-07-16 01:29:53.730619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:30.163 [2024-07-16 01:29:53.730631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:109848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.163 [2024-07-16 01:29:53.730638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:30.163 [2024-07-16 01:29:53.730650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:109592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.163 [2024-07-16 01:29:53.730657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:30.163 [2024-07-16 01:29:53.730670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:109000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.163 [2024-07-16 01:29:53.730677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.163 [2024-07-16 01:29:53.730689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:110320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.163 [2024-07-16 01:29:53.730697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:30.163 [2024-07-16 01:29:53.730709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:110336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.163 [2024-07-16 01:29:53.730716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:30.163 [2024-07-16 01:29:53.730731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:110352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.163 [2024-07-16 01:29:53.730738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:30.163 [2024-07-16 01:29:53.731943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:109696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.163 [2024-07-16 01:29:53.731959] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:30.163 [2024-07-16 01:29:53.731973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:109760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.163 [2024-07-16 01:29:53.731981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:30.163 [2024-07-16 01:29:53.731993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:109824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.163 [2024-07-16 01:29:53.732001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:30.163 [2024-07-16 01:29:53.732013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:109888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.163 [2024-07-16 01:29:53.732020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:30.163 [2024-07-16 01:29:53.732033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:108616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.163 [2024-07-16 01:29:53.732041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:30.163 [2024-07-16 01:29:53.732054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:109128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.163 [2024-07-16 01:29:53.732060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:30.163 [2024-07-16 01:29:53.732073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:109368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.163 [2024-07-16 01:29:53.732080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:30.163 [2024-07-16 01:29:53.732092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:109568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.163 [2024-07-16 01:29:53.732100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:30.163 [2024-07-16 01:29:53.732113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:108680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.163 [2024-07-16 01:29:53.732120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:30.163 [2024-07-16 01:29:53.732132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:109920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.163 [2024-07-16 01:29:53.732139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:30.163 [2024-07-16 01:29:53.732151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:109952 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:24:30.163 [2024-07-16 01:29:53.732158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:30.163 [2024-07-16 01:29:53.732173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:110016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.163 [2024-07-16 01:29:53.732180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:30.163 [2024-07-16 01:29:53.732193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:110080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.163 [2024-07-16 01:29:53.732199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:30.163 [2024-07-16 01:29:53.732213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:109528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.163 [2024-07-16 01:29:53.732220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:30.163 [2024-07-16 01:29:53.732233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:109768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.163 [2024-07-16 01:29:53.732241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:30.163 [2024-07-16 01:29:53.732253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:109896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.164 [2024-07-16 01:29:53.732261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:30.164 [2024-07-16 01:29:53.732273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:109208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.164 [2024-07-16 01:29:53.732280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:30.164 [2024-07-16 01:29:53.732293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:109056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.164 [2024-07-16 01:29:53.732300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:30.164 [2024-07-16 01:29:53.732313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:109944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.164 [2024-07-16 01:29:53.732320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:30.164 [2024-07-16 01:29:53.732332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:109936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.164 [2024-07-16 01:29:53.732344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:30.164 [2024-07-16 01:29:53.732358] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:110000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.164 [2024-07-16 01:29:53.732364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:30.164 [2024-07-16 01:29:53.732377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:110064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.164 [2024-07-16 01:29:53.732384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:30.164 [2024-07-16 01:29:53.732396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:109672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.164 [2024-07-16 01:29:53.732404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:30.164 [2024-07-16 01:29:53.732418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:109800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.164 [2024-07-16 01:29:53.732427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:30.164 [2024-07-16 01:29:53.732439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:109624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.164 [2024-07-16 01:29:53.732446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:30.164 [2024-07-16 01:29:53.732459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:110128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.164 [2024-07-16 01:29:53.732465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:30.164 [2024-07-16 01:29:53.732478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:110160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.164 [2024-07-16 01:29:53.732485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:30.164 [2024-07-16 01:29:53.732498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:110024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.164 [2024-07-16 01:29:53.732505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:30.164 [2024-07-16 01:29:53.732518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:110088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.164 [2024-07-16 01:29:53.732525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.164 [2024-07-16 01:29:53.732538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:110184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.164 [2024-07-16 01:29:53.732546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:24:30.164 [2024-07-16 01:29:53.732558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:110216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.164 [2024-07-16 01:29:53.732565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:30.164 [2024-07-16 01:29:53.732577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:110248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.164 [2024-07-16 01:29:53.732585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:30.164 [2024-07-16 01:29:53.732597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:110280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.164 [2024-07-16 01:29:53.732603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:30.164 [2024-07-16 01:29:53.732616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:109656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.164 [2024-07-16 01:29:53.732623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:30.164 [2024-07-16 01:29:53.732636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:109784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.164 [2024-07-16 01:29:53.732644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:30.164 [2024-07-16 01:29:53.732655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:109592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.164 [2024-07-16 01:29:53.732664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:30.164 [2024-07-16 01:29:53.732676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:110320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.164 [2024-07-16 01:29:53.732683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:30.164 [2024-07-16 01:29:53.732696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:110352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.164 [2024-07-16 01:29:53.732703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:30.164 [2024-07-16 01:29:53.735047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:110368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.164 [2024-07-16 01:29:53.735066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:30.164 [2024-07-16 01:29:53.735082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:110384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.164 [2024-07-16 01:29:53.735090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:30.164 [2024-07-16 01:29:53.735103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:110400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.164 [2024-07-16 01:29:53.735111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:30.164 [2024-07-16 01:29:53.735124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:110416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.164 [2024-07-16 01:29:53.735131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:30.164 [2024-07-16 01:29:53.735143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:110432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.164 [2024-07-16 01:29:53.735150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:30.164 [2024-07-16 01:29:53.735163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:110448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.164 [2024-07-16 01:29:53.735170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:30.164 [2024-07-16 01:29:53.735182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:110464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.164 [2024-07-16 01:29:53.735190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:30.164 [2024-07-16 01:29:53.735202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:110480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.164 [2024-07-16 01:29:53.735210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:30.164 [2024-07-16 01:29:53.735222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:110496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.164 [2024-07-16 01:29:53.735229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:30.164 [2024-07-16 01:29:53.735243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.164 [2024-07-16 01:29:53.735252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:30.164 [2024-07-16 01:29:53.735265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:110528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.164 [2024-07-16 01:29:53.735272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:30.164 [2024-07-16 01:29:53.735285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:110544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.164 [2024-07-16 01:29:53.735292] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
[... 00:24:30.164-00:24:30.169 (2024-07-16 01:29:53.735305-01:29:53.745862): repeated nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* pairs elided. READ commands (sqid:1, nsid:1, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE commands (sqid:1, nsid:1, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) spanning lba 109056-111336 each complete with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0 ...]
00:24:30.169 [2024-07-16 01:29:53.745862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:111336
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.169 [2024-07-16 01:29:53.745870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:30.169 [2024-07-16 01:29:53.745883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:111352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.169 [2024-07-16 01:29:53.745891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:30.169 [2024-07-16 01:29:53.745904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:111368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.169 [2024-07-16 01:29:53.745911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:30.169 [2024-07-16 01:29:53.745924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:110616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.169 [2024-07-16 01:29:53.745931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:30.170 [2024-07-16 01:29:53.745945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:110704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.170 [2024-07-16 01:29:53.745951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:30.170 [2024-07-16 01:29:53.745965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:110824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.170 [2024-07-16 01:29:53.745972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:30.170 [2024-07-16 01:29:53.746268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:110880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.170 [2024-07-16 01:29:53.746281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:30.170 [2024-07-16 01:29:53.746295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:110944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.170 [2024-07-16 01:29:53.746303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:30.170 [2024-07-16 01:29:53.746317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:111008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.170 [2024-07-16 01:29:53.746324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:30.170 [2024-07-16 01:29:53.746343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:111080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.170 [2024-07-16 01:29:53.746351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:30.170 [2024-07-16 01:29:53.746364] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:111112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.170 [2024-07-16 01:29:53.746371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:30.170 [2024-07-16 01:29:53.746384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:111144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.170 [2024-07-16 01:29:53.746392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:30.170 [2024-07-16 01:29:53.746406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:110888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.170 [2024-07-16 01:29:53.746414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:30.170 [2024-07-16 01:29:53.746427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:110952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.170 [2024-07-16 01:29:53.746434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:30.170 [2024-07-16 01:29:53.746447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:111016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.170 [2024-07-16 01:29:53.746454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:30.170 [2024-07-16 01:29:53.746468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:111200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.170 [2024-07-16 01:29:53.746475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:30.170 [2024-07-16 01:29:53.746488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:111232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.170 [2024-07-16 01:29:53.746495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:30.170 [2024-07-16 01:29:53.746508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:110680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.170 [2024-07-16 01:29:53.746515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:30.170 [2024-07-16 01:29:53.746528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:110800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.170 [2024-07-16 01:29:53.746536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:30.170 [2024-07-16 01:29:53.746548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:110408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.170 [2024-07-16 01:29:53.746556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 
sqhd:0057 p:0 m:0 dnr:0 00:24:30.170 [2024-07-16 01:29:53.746569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:110480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.170 [2024-07-16 01:29:53.746576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:30.170 [2024-07-16 01:29:53.746589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:110016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.170 [2024-07-16 01:29:53.746595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:30.170 [2024-07-16 01:29:53.746608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:110568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.170 [2024-07-16 01:29:53.746615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:30.170 [2024-07-16 01:29:53.746627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:110672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.170 [2024-07-16 01:29:53.746634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:30.170 [2024-07-16 01:29:53.746647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:110432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.170 [2024-07-16 01:29:53.746657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:30.170 [2024-07-16 01:29:53.746670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:110592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.170 [2024-07-16 01:29:53.746678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:30.170 [2024-07-16 01:29:53.746691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:110960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.170 [2024-07-16 01:29:53.746700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:30.170 [2024-07-16 01:29:53.746712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:110384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.170 [2024-07-16 01:29:53.746721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:30.170 [2024-07-16 01:29:53.746733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:110184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.170 [2024-07-16 01:29:53.746741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:30.170 [2024-07-16 01:29:53.746753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:111072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.170 [2024-07-16 01:29:53.746760] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.170 [2024-07-16 01:29:53.747817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:111384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.170 [2024-07-16 01:29:53.747835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:30.170 [2024-07-16 01:29:53.747850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:111400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.170 [2024-07-16 01:29:53.747860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:30.170 [2024-07-16 01:29:53.747873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:111416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.170 [2024-07-16 01:29:53.747881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:30.170 [2024-07-16 01:29:53.747894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:111432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.170 [2024-07-16 01:29:53.747902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:30.170 [2024-07-16 01:29:53.747915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:111448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.170 [2024-07-16 01:29:53.747923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:30.170 [2024-07-16 01:29:53.747937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:111464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.170 [2024-07-16 01:29:53.747944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:30.170 [2024-07-16 01:29:53.747957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:111480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.170 [2024-07-16 01:29:53.747968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:30.170 [2024-07-16 01:29:53.747981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:111192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.170 [2024-07-16 01:29:53.747989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:30.170 [2024-07-16 01:29:53.748002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:111224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.170 [2024-07-16 01:29:53.748010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:30.170 [2024-07-16 01:29:53.748023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:111256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.170 [2024-07-16 
01:29:53.748030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:30.170 [2024-07-16 01:29:53.748043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:111168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.170 [2024-07-16 01:29:53.748050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:30.170 [2024-07-16 01:29:53.748063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:110248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.170 [2024-07-16 01:29:53.748071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:30.170 [2024-07-16 01:29:53.748084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:111288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.170 [2024-07-16 01:29:53.748091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:30.170 [2024-07-16 01:29:53.748103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:111320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.170 [2024-07-16 01:29:53.748110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:30.170 [2024-07-16 01:29:53.748123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:111352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.170 [2024-07-16 01:29:53.748130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:30.170 [2024-07-16 01:29:53.748143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:110616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.171 [2024-07-16 01:29:53.748150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:30.171 [2024-07-16 01:29:53.748162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:110824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.171 [2024-07-16 01:29:53.748170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:30.171 [2024-07-16 01:29:53.748183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:110416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.171 [2024-07-16 01:29:53.748192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:30.171 [2024-07-16 01:29:53.748206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:110736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.171 [2024-07-16 01:29:53.748216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:30.171 [2024-07-16 01:29:53.748229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:110944 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.171 [2024-07-16 01:29:53.748236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:30.171 [2024-07-16 01:29:53.748249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:111080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.171 [2024-07-16 01:29:53.748261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:30.171 [2024-07-16 01:29:53.748274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:111144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.171 [2024-07-16 01:29:53.748281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:30.171 [2024-07-16 01:29:53.748294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:110952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.171 [2024-07-16 01:29:53.748302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:30.171 [2024-07-16 01:29:53.748316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:111200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.171 [2024-07-16 01:29:53.748323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:30.171 [2024-07-16 01:29:53.748341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:110680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.171 [2024-07-16 01:29:53.748349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:30.171 [2024-07-16 01:29:53.748362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:110408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.171 [2024-07-16 01:29:53.748369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:30.171 [2024-07-16 01:29:53.748382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:110016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.171 [2024-07-16 01:29:53.748390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:30.171 [2024-07-16 01:29:53.748402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:110672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.171 [2024-07-16 01:29:53.748410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:30.171 [2024-07-16 01:29:53.748423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:110592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.171 [2024-07-16 01:29:53.748430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:30.171 [2024-07-16 01:29:53.748442] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:110384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.171 [2024-07-16 01:29:53.748450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:30.171 [2024-07-16 01:29:53.748462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:111072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.171 [2024-07-16 01:29:53.748470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:30.171 [2024-07-16 01:29:53.750164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:110928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.171 [2024-07-16 01:29:53.750182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.171 [2024-07-16 01:29:53.750197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:111024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.171 [2024-07-16 01:29:53.750204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:30.171 [2024-07-16 01:29:53.750217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:111488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.171 [2024-07-16 01:29:53.750224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:30.171 [2024-07-16 01:29:53.750236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:111504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.171 [2024-07-16 01:29:53.750243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:30.171 [2024-07-16 01:29:53.750255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:111520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.171 [2024-07-16 01:29:53.750262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:30.171 [2024-07-16 01:29:53.750273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:111536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.171 [2024-07-16 01:29:53.750281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:30.171 [2024-07-16 01:29:53.750298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:111552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.171 [2024-07-16 01:29:53.750305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:30.171 [2024-07-16 01:29:53.750317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:111568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.171 [2024-07-16 01:29:53.750324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0008 
p:0 m:0 dnr:0 00:24:30.171 [2024-07-16 01:29:53.750336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:111584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.171 [2024-07-16 01:29:53.750350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:30.171 [2024-07-16 01:29:53.750362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:111600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.171 [2024-07-16 01:29:53.750370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:30.171 [2024-07-16 01:29:53.750383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:111616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.171 [2024-07-16 01:29:53.750391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:30.171 [2024-07-16 01:29:53.750404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:111632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.171 [2024-07-16 01:29:53.750411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:30.171 [2024-07-16 01:29:53.750427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:111648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.171 [2024-07-16 01:29:53.750435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:30.171 [2024-07-16 01:29:53.750448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:111120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.171 [2024-07-16 01:29:53.750456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:30.171 [2024-07-16 01:29:53.750468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:111400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.171 [2024-07-16 01:29:53.750475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:30.171 [2024-07-16 01:29:53.750488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:111432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.171 [2024-07-16 01:29:53.750495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:30.171 [2024-07-16 01:29:53.750507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:111464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.171 [2024-07-16 01:29:53.750515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:30.171 [2024-07-16 01:29:53.750528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:111192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.171 [2024-07-16 01:29:53.750535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:30.171 [2024-07-16 01:29:53.750547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:111256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.171 [2024-07-16 01:29:53.750555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:30.171 [2024-07-16 01:29:53.750567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:110248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.171 [2024-07-16 01:29:53.750574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:30.171 [2024-07-16 01:29:53.750588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:111320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.171 [2024-07-16 01:29:53.750595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:30.172 [2024-07-16 01:29:53.750607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:110616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.172 [2024-07-16 01:29:53.750615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:30.172 [2024-07-16 01:29:53.750628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:110416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.172 [2024-07-16 01:29:53.750636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:30.172 [2024-07-16 01:29:53.750649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:110944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.172 [2024-07-16 01:29:53.750656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:30.172 [2024-07-16 01:29:53.750670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:111144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.172 [2024-07-16 01:29:53.750681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:30.172 [2024-07-16 01:29:53.751678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:111200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.172 [2024-07-16 01:29:53.751695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:30.172 [2024-07-16 01:29:53.751711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:110408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.172 [2024-07-16 01:29:53.751719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:30.172 [2024-07-16 01:29:53.751733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:110672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.172 [2024-07-16 
01:29:53.751741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:30.172 [2024-07-16 01:29:53.751755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:110384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.172 [2024-07-16 01:29:53.751762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:30.172 [2024-07-16 01:29:53.751775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:111264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.172 [2024-07-16 01:29:53.751782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:30.172 [2024-07-16 01:29:53.751795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:111296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.172 [2024-07-16 01:29:53.751803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:30.172 [2024-07-16 01:29:53.751816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:111328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.172 [2024-07-16 01:29:53.751823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:30.172 [2024-07-16 01:29:53.751835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:111664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.172 [2024-07-16 01:29:53.751842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.172 [2024-07-16 01:29:53.751854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:111360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.172 [2024-07-16 01:29:53.751862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:30.172 [2024-07-16 01:29:53.751874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:111184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.172 [2024-07-16 01:29:53.751882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:30.172 [2024-07-16 01:29:53.751895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:111248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.172 [2024-07-16 01:29:53.751902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:30.172 [2024-07-16 01:29:53.751915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:110792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.172 [2024-07-16 01:29:53.751926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:30.172 [2024-07-16 01:29:53.751939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:111104 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.172 [2024-07-16 01:29:53.751946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:30.172 [2024-07-16 01:29:53.751959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:111408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.172 [2024-07-16 01:29:53.751967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:30.172 [2024-07-16 01:29:53.751980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:111440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.172 [2024-07-16 01:29:53.751987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:30.172 [2024-07-16 01:29:53.752000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:111472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.172 [2024-07-16 01:29:53.752007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:30.172 [2024-07-16 01:29:53.752492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:111680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.172 [2024-07-16 01:29:53.752507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:30.172 [2024-07-16 01:29:53.752522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:111696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.172 [2024-07-16 01:29:53.752530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:30.172 [2024-07-16 01:29:53.752544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:111712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.172 [2024-07-16 01:29:53.752552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:30.172 [2024-07-16 01:29:53.752565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:111728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.172 [2024-07-16 01:29:53.752574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:30.172 [2024-07-16 01:29:53.752586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:111744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.172 [2024-07-16 01:29:53.752594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:30.172 [2024-07-16 01:29:53.752606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:111760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.172 [2024-07-16 01:29:53.752614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:30.172 [2024-07-16 01:29:53.752626] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:111776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.172 [2024-07-16 01:29:53.752634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:30.172 [2024-07-16 01:29:53.752648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:111792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.172 [2024-07-16 01:29:53.752658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:30.172 [2024-07-16 01:29:53.752671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:111808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.172 [2024-07-16 01:29:53.752678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:30.172 [2024-07-16 01:29:53.752692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:111824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.172 [2024-07-16 01:29:53.752699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:30.172 [2024-07-16 01:29:53.752712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:111840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.172 [2024-07-16 01:29:53.752719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:30.172 [2024-07-16 01:29:53.752732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:111304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.172 [2024-07-16 01:29:53.752739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:30.172 [2024-07-16 01:29:53.752752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:111368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.172 [2024-07-16 01:29:53.752759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:30.172 [2024-07-16 01:29:53.752771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:111024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.172 [2024-07-16 01:29:53.752779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:30.172 [2024-07-16 01:29:53.752791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:111504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.172 [2024-07-16 01:29:53.752799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:30.172 [2024-07-16 01:29:53.752811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:111536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.172 [2024-07-16 01:29:53.752819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0039 p:0 
m:0 dnr:0 00:24:30.172 [2024-07-16 01:29:53.752831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:111568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.172 [2024-07-16 01:29:53.752840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:30.172 [2024-07-16 01:29:53.752853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:111600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.172 [2024-07-16 01:29:53.752860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:30.172 [2024-07-16 01:29:53.752873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:111632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.172 [2024-07-16 01:29:53.752880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:30.172 [2024-07-16 01:29:53.752894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:111120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.172 [2024-07-16 01:29:53.752900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:30.172 [2024-07-16 01:29:53.752915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:111432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.172 [2024-07-16 01:29:53.752923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:30.172 [2024-07-16 01:29:53.752936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:111192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.173 [2024-07-16 01:29:53.752943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:30.173 [2024-07-16 01:29:53.752956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:110248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.173 [2024-07-16 01:29:53.752963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:30.173 [2024-07-16 01:29:53.752976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:110616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.173 [2024-07-16 01:29:53.752983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.173 [2024-07-16 01:29:53.752996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:110944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.173 [2024-07-16 01:29:53.753003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:30.173 [2024-07-16 01:29:53.753325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:111232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.173 [2024-07-16 01:29:53.753343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:30.173 [2024-07-16 01:29:53.753357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:110568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.173 [2024-07-16 01:29:53.753365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:30.173 [2024-07-16 01:29:53.753378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:111848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.173 [2024-07-16 01:29:53.753392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:30.173 [2024-07-16 01:29:53.753405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:111864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.173 [2024-07-16 01:29:53.753413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:30.173 [2024-07-16 01:29:53.753425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:111880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.173 [2024-07-16 01:29:53.753433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:30.173 [2024-07-16 01:29:53.753445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:111896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.173 [2024-07-16 01:29:53.753453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:30.173 [2024-07-16 01:29:53.753465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:111912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.173 [2024-07-16 01:29:53.753473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:30.173 [2024-07-16 01:29:53.753488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:111928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.173 [2024-07-16 01:29:53.753495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:30.173 [2024-07-16 01:29:53.753508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:111944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.173 [2024-07-16 01:29:53.753516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:30.173 [2024-07-16 01:29:53.753528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:111960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:30.173 [2024-07-16 01:29:53.753536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:30.173 [2024-07-16 01:29:53.753548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:111512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.173 [2024-07-16 
01:29:53.753555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:24:30.173 [a long run of further paired nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* lines condensed here: each outstanding READ/WRITE on sqid:1 (cid 7-126, lba 110384-112240, len:8) completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 004e through 0024, while the active path sits in the inaccessible ANA state during the multipath transition]
00:24:30.175 Received shutdown signal, test time was about 27.013291 seconds
00:24:30.175
00:24:30.175                                                                  Latency(us)
00:24:30.175 Device Information                                                       : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:24:30.175 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:30.175 	 Verification LBA range: start 0x0 length 0x4000
00:24:30.175 	 Nvme0n1             :      27.01   10540.98      41.18       0.00       0.00   12123.92     134.58 3019898.88
00:24:30.175 ===================================================================================================================
00:24:30.175 	 Total               :              10540.98      41.18       0.00       0.00   12123.92     134.58 3019898.88
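The IOPS, MiB/s and latency columns in the summary above are mutually consistent; as a quick sanity check (a bc sketch added here for the reader, not part of the test output), using the 4096-byte IO size and queue depth 128 from the Job line:

  $ echo "scale=4; 10540.98 * 4096 / 1048576" | bc   # MiB/s = IOPS * IO size / 2^20
  41.1757                                            # the reported 41.18 after rounding
  $ echo "scale=2; 128 * 1000000 / 12123.92" | bc    # Little's law: queue depth / avg latency (us)
  10557.66                                           # close to the measured 10540.98 IOPS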
00:24:30.175 01:29:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:30.433 01:29:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:24:30.433 01:29:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:30.433 01:29:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:24:30.433 01:29:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:24:30.433 01:29:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:24:30.433 01:29:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:24:30.433 01:29:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:24:30.433 01:29:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:24:30.433 01:29:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:24:30.433 rmmod nvme_tcp
00:24:30.433 rmmod nvme_fabrics
00:24:30.433 rmmod nvme_keyring
00:24:30.433 01:29:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:24:30.433 01:29:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:24:30.433 01:29:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:24:30.433 01:29:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 3494263 ']'
00:24:30.433 01:29:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 3494263
00:24:30.433 01:29:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 3494263 ']'
00:24:30.434 01:29:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 3494263
00:24:30.434 01:29:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname
00:24:30.434 01:29:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:30.434 01:29:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3494263
00:24:30.434 01:29:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:24:30.434 01:29:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:24:30.434 01:29:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3494263'
00:24:30.434 killing process with pid 3494263
00:24:30.434 01:29:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 3494263
00:24:30.434 01:29:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 3494263
00:24:30.692 01:29:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
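The killprocess sequence traced just above amounts to roughly the following helper (a reconstruction from the xtrace output, not the verbatim autotest_common.sh source; the sudo branch is inferred from the '[' reactor_0 = sudo ']' test):

  killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                          # @948: refuse an empty pid
    kill -0 "$pid"                                     # @952: error out if the process is already gone
    local process_name
    if [ "$(uname)" = Linux ]; then
      process_name=$(ps --no-headers -o comm= "$pid")  # @954: e.g. reactor_0 for an SPDK app
    fi
    if [ "$process_name" = sudo ]; then                # @958: assumed branch for sudo-wrapped children
      sudo kill "$pid"
    else
      echo "killing process with pid $pid"             # @966
      kill "$pid"                                      # @967
    fi
    wait "$pid"                                        # @972: reap it so sockets and shm are released
  }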
00:24:30.692 01:29:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:24:30.692 01:29:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:24:30.692 01:29:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:30.692 01:29:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:24:30.692 01:29:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:30.692 01:29:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:30.692 01:29:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:33.226 01:29:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:24:33.226
00:24:33.226 real	0m39.279s
00:24:33.226 user	1m46.182s
00:24:33.226 sys	0m10.423s
00:24:33.226 01:29:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable
00:24:33.226 01:29:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:24:33.226 ************************************
00:24:33.226 END TEST nvmf_host_multipath_status
00:24:33.226 ************************************
00:24:33.226 01:29:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:24:33.226 01:29:58 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:24:33.226 01:29:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:24:33.226 01:29:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:24:33.226 01:29:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:24:33.226 ************************************
00:24:33.226 START TEST nvmf_discovery_remove_ifc
00:24:33.226 ************************************
00:24:33.226 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:24:33.226 * Looking for test storage...
00:24:33.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:24:33.226 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:24:33.226 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:24:33.226 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:24:33.226 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:24:33.226 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:24:33.226 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:24:33.226 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:24:33.226 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:24:33.226 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:24:33.226 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:24:33.226 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:24:33.226 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:24:33.226 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:24:33.226 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
00:24:33.226 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:24:33.226 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:24:33.226 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:24:33.226 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:24:33.226 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:24:33.226 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:24:33.226 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:33.226 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:33.226 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:33.227 [paths/export.sh@3 and @4 prepend the same go/golangci/protoc toolchain prefixes again; the three near-identical, heavily repeated PATH assignments are condensed here]
00:24:33.227 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH
00:24:33.227 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo [the fully expanded PATH value, condensed as above]
00:24:33.227 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0
00:24:33.227 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:24:33.227 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:24:33.227 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:24:33.227 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:24:33.227 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:24:33.227 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:24:33.227 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:24:33.227 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0
00:24:33.227 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
00:24:33.227 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009
00:24:33.227 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
00:24:33.227 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode
00:24:33.227 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test
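The NVME_HOSTNQN captured earlier in this sourcing block is the spec-defined UUID-based NQN form, which nvme-cli generates directly (illustration only; the uuidgen variant is an approximation, not what the script runs):

  $ nvme gen-hostnqn                                     # what nvmf/common.sh@17 invokes
  nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
  $ echo "nqn.2014-08.org.nvmexpress:uuid:$(uuidgen)"    # same shape with a random UUID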
00:24:33.227 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock
00:24:33.227 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit
00:24:33.227 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:24:33.227 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:24:33.227 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs
00:24:33.227 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no
00:24:33.227 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns
00:24:33.227 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:33.227 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:33.227 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:33.227 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:24:33.227 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:24:33.227 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable
00:24:33.227 01:29:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:24:38.493 01:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:24:38.493 [nvmf/common.sh@291-@318 condensed: the empty pci_devs/pci_net_devs/pci_drivers/net_devs/e810/x722/mlx arrays are declared, then the device-ID tables are filled from pci_bus_cache -- e810 gets $intel:0x1592 and $intel:0x159b, x722 gets $intel:0x37d2, and mlx gets $mellanox:0xa2dc/0x1021/0xa2d6/0x101d/0x1017/0x1019/0x1015/0x1013]
00:24:38.493 01:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:24:38.493 01:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:24:38.493 01:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:24:38.493 01:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:24:38.493 01:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:24:38.493 01:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:24:38.493 01:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:24:38.493 01:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:24:38.493 Found 0000:86:00.0 (0x8086 - 0x159b)
00:24:38.493 [the per-device driver/ID checks at nvmf/common.sh@342-@352 (ice == unknown/unbound, 0x159b vs 0x1017/0x1019, tcp == rdma) pass and repeat identically for the second port]
00:24:38.493 01:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:24:38.493 Found 0000:86:00.1 (0x8086 - 0x159b)
00:24:38.493 01:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:24:38.493 01:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:24:38.493 01:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:24:38.493 01:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:24:38.494 01:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:38.494 01:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:24:38.494 01:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:38.494 01:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]]
00:24:38.494 01:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:24:38.494 01:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:38.494 01:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:24:38.494 Found net devices under 0000:86:00.0: cvl_0_0
00:24:38.494 01:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:24:38.494 [the same @382-@399 sysfs glob-and-filter pass repeats for 0000:86:00.1]
00:24:38.494 01:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:24:38.494 Found net devices under 0000:86:00.1: cvl_0_1
00:24:38.494 01:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:24:38.494 01:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:24:38.494 01:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes
00:24:38.494 01:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:24:38.494 01:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:24:38.494 01:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:24:38.494 01:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:24:38.494 01:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:24:38.494 01:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:24:38.494 01:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:24:38.494 01:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:24:38.494 01:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:24:38.494 01:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:24:38.494 01:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
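The pci_net_devs globbing traced above is all it takes to map a PCI function to its kernel netdev; the equivalent by hand (a sketch using the sysfs paths and addresses exactly as they appear in this run):

  $ for pci in 0000:86:00.0 0000:86:00.1; do
  >   ls "/sys/bus/pci/devices/$pci/net/"    # each NIC port lists its net interfaces here
  > done
  cvl_0_0
  cvl_0_1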
00:24:38.494 01:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:38.494 01:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:24:38.494 01:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:24:38.494 01:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:24:38.494 01:30:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:38.494 01:30:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:38.494 01:30:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:38.494 01:30:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:24:38.494 01:30:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:38.494 01:30:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:38.494 01:30:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:38.494 01:30:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:24:38.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:38.494 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms
00:24:38.494
00:24:38.494 --- 10.0.0.2 ping statistics ---
00:24:38.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:38.494 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms
00:24:38.494 01:30:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:38.494 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:38.494 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms
00:24:38.494
00:24:38.494 --- 10.0.0.1 ping statistics ---
00:24:38.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:38.494 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms
00:24:38.494 01:30:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:38.494 01:30:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0
00:24:38.494 01:30:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:24:38.494 01:30:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:38.494 01:30:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:24:38.494 01:30:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:24:38.494 01:30:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:38.494 01:30:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:24:38.494 01:30:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:24:38.495 01:30:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2
00:24:38.495 01:30:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:24:38.495 01:30:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable
00:24:38.495 01:30:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:24:38.495 01:30:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=3503166
00:24:38.495 01:30:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 3503166
00:24:38.495 01:30:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:24:38.495 01:30:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 3503166 ']'
00:24:38.495 01:30:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:38.495 01:30:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:38.495 01:30:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:38.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:38.495 01:30:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:38.495 01:30:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:24:38.495 [2024-07-16 01:30:04.259370] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization...
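Before the target app comes up, the namespace split configured a few entries above can be spot-checked by hand (a sketch, not part of the test; assumes root and the interface names from this run):

  $ ip netns exec cvl_0_0_ns_spdk ip -4 addr show dev cvl_0_0   # expect 10.0.0.2/24 (target side)
  $ ip -4 addr show dev cvl_0_1                                 # expect 10.0.0.1/24 (initiator side)
  $ ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1            # reverse direction of the ping traced above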
00:24:38.495 [2024-07-16 01:30:04.259413] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:38.495 EAL: No free 2048 kB hugepages reported on node 1
00:24:38.495 [2024-07-16 01:30:04.317777] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:38.495 [2024-07-16 01:30:04.392352] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:38.495 [2024-07-16 01:30:04.392391] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:38.495 [2024-07-16 01:30:04.392398] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:38.495 [2024-07-16 01:30:04.392403] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:38.495 [2024-07-16 01:30:04.392408] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:38.495 [2024-07-16 01:30:04.392426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:39.061 01:30:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:39.061 01:30:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0
00:24:39.061 01:30:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:24:39.061 01:30:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable
00:24:39.061 01:30:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:24:39.320 01:30:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:39.320 01:30:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd
00:24:39.320 01:30:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:39.320 01:30:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:24:39.320 [2024-07-16 01:30:05.089165] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:39.320 [2024-07-16 01:30:05.097317] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:24:39.320 null0
00:24:39.320 [2024-07-16 01:30:05.129291] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:39.320 01:30:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:39.320 01:30:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3503368
00:24:39.320 01:30:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme
00:24:39.320 01:30:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3503368 /tmp/host.sock
00:24:39.320 01:30:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 3503368 ']'
00:24:39.320 01:30:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock
00:24:39.320 01:30:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:39.320 01:30:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:24:39.320 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:24:39.320 01:30:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:39.320 01:30:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:24:39.320 [2024-07-16 01:30:05.196737] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization...
00:24:39.320 [2024-07-16 01:30:05.196780] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3503368 ]
00:24:39.320 EAL: No free 2048 kB hugepages reported on node 1
00:24:39.320 [2024-07-16 01:30:05.250761] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:39.579 [2024-07-16 01:30:05.331745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:24:40.146 01:30:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:40.146 01:30:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0
00:24:40.146 01:30:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:24:40.146 01:30:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1
00:24:40.147 01:30:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:40.147 01:30:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:24:40.147 01:30:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:40.147 01:30:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init
00:24:40.147 01:30:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:40.147 01:30:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:24:41.523 01:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:41.523 01:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach
00:24:41.523 01:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:41.523 01:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:24:41.523 [2024-07-16 01:30:07.106955] bdev_nvme.c:6991:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:24:41.523 [2024-07-16 01:30:07.106974] bdev_nvme.c:7071:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:24:41.523 [2024-07-16 01:30:07.106986] bdev_nvme.c:6954:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
bdev_nvme.c:6920:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:41.524 [2024-07-16 01:30:07.339190] bdev_nvme.c:7781:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:41.524 [2024-07-16 01:30:07.339233] bdev_nvme.c:7781:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:41.524 [2024-07-16 01:30:07.339254] bdev_nvme.c:7781:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:41.524 [2024-07-16 01:30:07.339265] bdev_nvme.c:6810:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:41.524 [2024-07-16 01:30:07.339283] bdev_nvme.c:6769:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:41.524 01:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.524 01:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:41.524 01:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:41.524 01:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:41.524 01:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:41.524 01:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.524 01:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:41.524 01:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:41.524 01:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:41.524 [2024-07-16 01:30:07.346043] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1469ec0 was disconnected and freed. delete nvme_qpair. 
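wait_for_bdev drives the rest of this test: it compares the expected bdev name against get_bdev_list, whose expansion is visible in the xtrace above (rpc_cmd piped through jq, sort and xargs). Reconstructed as standalone shell, with the autotest rpc_cmd wrapper replaced by a direct scripts/rpc.py call:

    # get_bdev_list as it expands in the trace: every bdev name known to the
    # host app, sorted, on one space-separated line.
    get_bdev_list() {
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # wait_for_bdev-style poll (sketch): retry once a second until the
    # expected name appears in the list.
    while [[ "$(get_bdev_list)" != "nvme0n1" ]]; do
        sleep 1
    done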
00:24:41.524 01:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.524 01:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:41.524 01:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:41.524 01:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:41.524 01:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:41.524 01:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:41.524 01:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:41.524 01:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:41.524 01:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.524 01:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:41.524 01:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:41.524 01:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:41.782 01:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.782 01:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:41.782 01:30:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:42.718 01:30:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:42.718 01:30:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:42.718 01:30:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:42.718 01:30:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:42.718 01:30:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.718 01:30:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:42.718 01:30:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:42.718 01:30:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.718 01:30:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:42.718 01:30:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:43.654 01:30:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:43.654 01:30:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:43.654 01:30:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:43.654 01:30:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.654 01:30:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:43.654 01:30:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # 
set +x 00:24:43.654 01:30:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:43.654 01:30:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.912 01:30:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:43.912 01:30:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:44.849 01:30:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:44.849 01:30:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:44.849 01:30:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:44.849 01:30:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.849 01:30:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:44.849 01:30:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:44.849 01:30:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:44.849 01:30:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.849 01:30:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:44.849 01:30:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:45.786 01:30:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:45.786 01:30:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:45.786 01:30:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:45.786 01:30:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.786 01:30:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:45.786 01:30:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:45.786 01:30:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:45.786 01:30:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.786 01:30:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:45.786 01:30:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:47.163 01:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:47.163 01:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:47.163 01:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:47.163 01:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:47.163 01:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.163 01:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:47.163 01:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:47.163 [2024-07-16 01:30:12.780808] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:47.163 [2024-07-16 01:30:12.780853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.163 [2024-07-16 01:30:12.780864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.163 [2024-07-16 01:30:12.780874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.163 [2024-07-16 01:30:12.780882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.163 [2024-07-16 01:30:12.780889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.163 [2024-07-16 01:30:12.780896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.163 [2024-07-16 01:30:12.780904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.163 [2024-07-16 01:30:12.780912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.163 [2024-07-16 01:30:12.780919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.163 [2024-07-16 01:30:12.780926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.163 [2024-07-16 01:30:12.780932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430810 is same with the state(5) to be set 00:24:47.163 01:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.163 [2024-07-16 01:30:12.790830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430810 (9): Bad file descriptor 00:24:47.163 [2024-07-16 01:30:12.800869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:47.163 01:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:47.163 01:30:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:48.097 01:30:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:48.097 01:30:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:48.097 01:30:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:48.097 01:30:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:48.097 01:30:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.097 01:30:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:48.097 01:30:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:48.097 [2024-07-16 01:30:13.841363] 
posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:48.097 [2024-07-16 01:30:13.841404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430810 with addr=10.0.0.2, port=4420 00:24:48.097 [2024-07-16 01:30:13.841424] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430810 is same with the state(5) to be set 00:24:48.097 [2024-07-16 01:30:13.841450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430810 (9): Bad file descriptor 00:24:48.097 [2024-07-16 01:30:13.841885] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:48.097 [2024-07-16 01:30:13.841906] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:48.097 [2024-07-16 01:30:13.841916] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:48.097 [2024-07-16 01:30:13.841928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:48.097 [2024-07-16 01:30:13.841946] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:48.097 [2024-07-16 01:30:13.841957] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:48.097 01:30:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.097 01:30:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:48.097 01:30:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:49.031 [2024-07-16 01:30:14.844430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:49.031 [2024-07-16 01:30:14.844451] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:49.031 [2024-07-16 01:30:14.844458] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:49.031 [2024-07-16 01:30:14.844464] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:24:49.031 [2024-07-16 01:30:14.844475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
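The errno 110 (ETIMEDOUT) records above are the intended outcome: the discovery session was created with a 1 s reconnect delay, a 1 s fast-io-fail window and a 2 s controller-loss timeout, so once the interface disappears the bdev layer retries briefly and then declares the controller lost. For reference, the bdev_nvme_start_discovery call traced earlier, issued directly through rpc.py instead of the rpc_cmd wrapper:

    # The discovery call from the trace, with the reconnect policy that
    # produces the retry/failure records above.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 \
        --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 \
        --wait-for-attach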
00:24:49.031 [2024-07-16 01:30:14.844493] bdev_nvme.c:6742:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:49.031 [2024-07-16 01:30:14.844510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:49.031 [2024-07-16 01:30:14.844518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.031 [2024-07-16 01:30:14.844527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:49.031 [2024-07-16 01:30:14.844533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.031 [2024-07-16 01:30:14.844540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:49.031 [2024-07-16 01:30:14.844547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.031 [2024-07-16 01:30:14.844553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:49.031 [2024-07-16 01:30:14.844559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.031 [2024-07-16 01:30:14.844566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:49.031 [2024-07-16 01:30:14.844572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:49.031 [2024-07-16 01:30:14.844579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:24:49.031 [2024-07-16 01:30:14.844683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x142fc90 (9): Bad file descriptor 00:24:49.031 [2024-07-16 01:30:14.845693] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:49.031 [2024-07-16 01:30:14.845702] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:24:49.031 01:30:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:49.031 01:30:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:49.031 01:30:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:49.031 01:30:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.031 01:30:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:49.031 01:30:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:49.031 01:30:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:49.031 01:30:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.031 01:30:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:49.031 01:30:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:49.032 01:30:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:49.032 01:30:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:49.032 01:30:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:49.032 01:30:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:49.032 01:30:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:49.032 01:30:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.032 01:30:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:49.032 01:30:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:49.032 01:30:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:49.032 01:30:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.290 01:30:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:49.290 01:30:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:50.224 01:30:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:50.224 01:30:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:50.224 01:30:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:50.224 01:30:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.224 01:30:16 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:24:50.224 01:30:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:50.224 01:30:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:50.224 01:30:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.224 01:30:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:50.224 01:30:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:51.158 [2024-07-16 01:30:16.902494] bdev_nvme.c:6991:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:51.158 [2024-07-16 01:30:16.902510] bdev_nvme.c:7071:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:51.158 [2024-07-16 01:30:16.902523] bdev_nvme.c:6954:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:51.158 [2024-07-16 01:30:16.988789] bdev_nvme.c:6920:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:51.158 [2024-07-16 01:30:17.086074] bdev_nvme.c:7781:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:51.158 [2024-07-16 01:30:17.086109] bdev_nvme.c:7781:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:51.158 [2024-07-16 01:30:17.086126] bdev_nvme.c:7781:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:51.158 [2024-07-16 01:30:17.086139] bdev_nvme.c:6810:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:51.158 [2024-07-16 01:30:17.086146] bdev_nvme.c:6769:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:51.158 [2024-07-16 01:30:17.091088] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x141ebc0 was disconnected and freed. delete nvme_qpair. 
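Restoring the address and link lets the discovery poller reconnect to port 8009; it attaches the subsystem as a fresh controller, so the namespace comes back as nvme1n1 rather than nvme0n1, the failed controller having been torn down. The interface flap at the heart of this test, collected from the traced commands:

    # The interface flap exercised by discovery_remove_ifc.sh, as traced:
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    # ... poll get_bdev_list until it returns an empty list ...
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    # ... poll get_bdev_list until it reports nvme1n1 ...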
00:24:51.158 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:51.158 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:51.158 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:51.158 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:51.158 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.158 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:51.158 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:51.158 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.158 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:51.158 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:51.158 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3503368 00:24:51.158 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 3503368 ']' 00:24:51.158 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 3503368 00:24:51.158 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:24:51.417 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:51.417 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3503368 00:24:51.417 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:51.417 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:51.417 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3503368' 00:24:51.417 killing process with pid 3503368 00:24:51.417 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 3503368 00:24:51.417 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 3503368 00:24:51.417 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:51.417 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:51.417 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:24:51.417 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:51.417 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:24:51.417 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:51.417 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:51.417 rmmod nvme_tcp 00:24:51.417 rmmod nvme_fabrics 00:24:51.417 rmmod nvme_keyring 00:24:51.676 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:51.676 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:24:51.676 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
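killprocess, expanded in the xtrace above, checks that the pid is still alive and is not a sudo wrapper before signalling it, then reaps it so the exit status is collected. A condensed sketch of the helper as traced:

    # killprocess as it expands above (condensed sketch of the autotest helper).
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                          # '[' -z $pid ']' guard
        kill -0 "$pid" || return 1                         # process must still exist
        # never signal the sudo wrapper itself (ps comm check in the trace)
        [ "$(ps --no-headers -o comm= "$pid")" != sudo ] || return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                        # reap, collect exit status
    }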
00:24:51.676 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 3503166 ']' 00:24:51.676 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 3503166 00:24:51.676 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 3503166 ']' 00:24:51.676 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 3503166 00:24:51.676 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:24:51.676 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:51.676 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3503166 00:24:51.676 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:51.676 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:51.676 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3503166' 00:24:51.676 killing process with pid 3503166 00:24:51.676 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 3503166 00:24:51.676 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 3503166 00:24:51.676 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:51.676 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:51.676 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:51.676 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:51.676 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:51.676 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.676 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:51.676 01:30:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:54.242 01:30:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:54.242 00:24:54.242 real 0m20.968s 00:24:54.242 user 0m26.540s 00:24:54.242 sys 0m5.314s 00:24:54.242 01:30:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:54.242 01:30:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:54.242 ************************************ 00:24:54.242 END TEST nvmf_discovery_remove_ifc 00:24:54.242 ************************************ 00:24:54.242 01:30:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:54.242 01:30:19 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:54.242 01:30:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:54.242 01:30:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:54.242 01:30:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:54.242 ************************************ 00:24:54.242 START TEST nvmf_identify_kernel_target 00:24:54.242 ************************************ 
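nvmftestfini, traced just above before the nvmf_identify_kernel_target body begins, unwinds the fixture in reverse order: unload the initiator modules, kill the target (pid 3503166), remove the network namespace and flush the initiator address, so the next test can rebuild the same topology from scratch. Condensed sketch; the body of _remove_spdk_ns is not shown in this excerpt, so the 'ip netns delete' line is an assumed equivalent:

    # nvmftestfini teardown, condensed from the trace above.
    modprobe -v -r nvme-tcp          # the log shows nvme_tcp, nvme_fabrics and
    modprobe -v -r nvme-fabrics      # nvme_keyring being removed here
    kill "$nvmfpid" && wait "$nvmfpid"
    ip netns delete cvl_0_0_ns_spdk  # assumption: what _remove_spdk_ns amounts to
    ip -4 addr flush cvl_0_1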
00:24:54.242 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:54.242 * Looking for test storage... 00:24:54.242 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:54.242 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:54.242 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:54.242 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:54.242 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:54.242 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:54.242 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:54.242 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:54.242 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:54.242 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:54.242 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:54.242 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:54.242 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:54.242 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:54.242 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:54.242 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:54.242 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:54.242 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:54.242 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:54.242 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:54.242 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:54.242 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:54.242 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:54.242 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.242 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.242 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.242 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:54.242 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.242 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:24:54.242 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:54.242 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:54.242 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:54.242 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:54.242 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:54.242 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:54.242 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:54.242 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:54.242 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:54.242 01:30:19 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:54.242 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:54.242 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:54.242 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:54.242 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:54.243 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:54.243 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:54.243 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:54.243 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:54.243 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:54.243 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:24:54.243 01:30:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:59.508 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:59.508 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:59.508 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:59.509 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.509 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:59.509 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:59.509 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:59.509 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:59.509 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.509 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:59.509 Found net devices under 0000:86:00.0: cvl_0_0 00:24:59.509 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.509 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:59.509 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.509 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:59.509 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:59.509 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:59.509 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:59.509 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.509 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:59.509 Found net devices under 0000:86:00.1: cvl_0_1 00:24:59.509 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.509 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:59.509 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:24:59.509 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:59.509 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:59.509 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:59.509 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:59.509 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:59.509 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:59.509 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:59.509 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:59.509 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:59.509 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:59.509 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:59.509 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:59.509 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:59.509 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:59.509 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:59.509 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:59.509 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:59.509 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:59.509 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:59.509 01:30:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:59.509 01:30:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:59.509 01:30:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:59.509 01:30:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:59.509 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:59.509 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:24:59.509 00:24:59.509 --- 10.0.0.2 ping statistics --- 00:24:59.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.509 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:24:59.509 01:30:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:59.509 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:59.509 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:24:59.509 00:24:59.509 --- 10.0.0.1 ping statistics --- 00:24:59.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.509 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:24:59.509 01:30:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:59.509 01:30:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:24:59.509 01:30:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:59.509 01:30:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:59.509 01:30:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:59.509 01:30:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:59.509 01:30:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:59.509 01:30:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:59.509 01:30:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:59.509 01:30:25 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:59.509 01:30:25 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:59.509 01:30:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:24:59.509 01:30:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:59.509 01:30:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:59.509 01:30:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.509 01:30:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.509 01:30:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:59.509 01:30:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.509 01:30:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:59.509 01:30:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:59.509 01:30:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:59.509 01:30:25 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:59.509 01:30:25 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:59.509 01:30:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:59.509 01:30:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:59.509 01:30:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:59.509 01:30:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:59.509 01:30:25 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:59.509 01:30:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:24:59.509 01:30:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:24:59.509 01:30:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:59.509 01:30:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:59.509 01:30:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:01.400 Waiting for block devices as requested 00:25:01.400 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:01.658 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:01.658 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:01.658 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:01.916 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:01.916 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:01.916 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:01.916 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:02.173 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:02.173 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:02.173 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:02.173 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:02.432 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:02.432 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:02.432 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:02.690 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:02.690 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:02.690 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:02.690 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:02.690 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:02.690 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:25:02.690 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:02.690 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:02.690 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:02.690 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:02.690 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:02.690 No valid GPT data, bailing 00:25:02.690 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:02.690 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:25:02.690 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:25:02.690 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:02.690 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:25:02.690 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:02.690 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:02.949 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:02.949 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:02.949 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:25:02.949 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:25:02.949 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:25:02.949 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:02.949 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:25:02.949 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:25:02.949 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:25:02.949 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:02.949 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:02.949 00:25:02.949 Discovery Log Number of Records 2, Generation counter 2 00:25:02.949 =====Discovery Log Entry 0====== 00:25:02.949 trtype: tcp 00:25:02.949 adrfam: ipv4 00:25:02.949 subtype: current discovery subsystem 00:25:02.949 treq: not specified, sq flow control disable supported 00:25:02.949 portid: 1 00:25:02.949 trsvcid: 4420 00:25:02.949 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:02.949 traddr: 10.0.0.1 00:25:02.949 eflags: none 00:25:02.949 sectype: none 00:25:02.949 =====Discovery Log Entry 1====== 00:25:02.949 trtype: tcp 00:25:02.949 adrfam: ipv4 00:25:02.949 subtype: nvme subsystem 00:25:02.949 treq: not specified, sq flow control disable supported 00:25:02.949 portid: 1 00:25:02.949 trsvcid: 4420 00:25:02.949 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:02.949 traddr: 10.0.0.1 00:25:02.949 eflags: none 00:25:02.949 sectype: none 00:25:02.949 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:02.949 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:02.949 EAL: No free 2048 kB hugepages reported on node 1 00:25:02.949 ===================================================== 00:25:02.949 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:02.949 ===================================================== 00:25:02.949 Controller Capabilities/Features 00:25:02.949 ================================ 00:25:02.949 Vendor ID: 0000 00:25:02.949 Subsystem Vendor ID: 0000 00:25:02.949 Serial Number: e5ffb6d3d6fe97262adb 00:25:02.949 Model Number: Linux 00:25:02.949 Firmware Version: 6.7.0-68 00:25:02.949 Recommended Arb Burst: 0 00:25:02.949 IEEE OUI Identifier: 00 00 00 00:25:02.949 Multi-path I/O 00:25:02.949 May have multiple subsystem ports: No 00:25:02.949 May have multiple 
controllers: No 00:25:02.949 Associated with SR-IOV VF: No 00:25:02.949 Max Data Transfer Size: Unlimited 00:25:02.949 Max Number of Namespaces: 0 00:25:02.949 Max Number of I/O Queues: 1024 00:25:02.949 NVMe Specification Version (VS): 1.3 00:25:02.949 NVMe Specification Version (Identify): 1.3 00:25:02.949 Maximum Queue Entries: 1024 00:25:02.949 Contiguous Queues Required: No 00:25:02.949 Arbitration Mechanisms Supported 00:25:02.949 Weighted Round Robin: Not Supported 00:25:02.949 Vendor Specific: Not Supported 00:25:02.949 Reset Timeout: 7500 ms 00:25:02.949 Doorbell Stride: 4 bytes 00:25:02.949 NVM Subsystem Reset: Not Supported 00:25:02.949 Command Sets Supported 00:25:02.949 NVM Command Set: Supported 00:25:02.949 Boot Partition: Not Supported 00:25:02.949 Memory Page Size Minimum: 4096 bytes 00:25:02.949 Memory Page Size Maximum: 4096 bytes 00:25:02.949 Persistent Memory Region: Not Supported 00:25:02.949 Optional Asynchronous Events Supported 00:25:02.949 Namespace Attribute Notices: Not Supported 00:25:02.949 Firmware Activation Notices: Not Supported 00:25:02.949 ANA Change Notices: Not Supported 00:25:02.949 PLE Aggregate Log Change Notices: Not Supported 00:25:02.949 LBA Status Info Alert Notices: Not Supported 00:25:02.949 EGE Aggregate Log Change Notices: Not Supported 00:25:02.949 Normal NVM Subsystem Shutdown event: Not Supported 00:25:02.949 Zone Descriptor Change Notices: Not Supported 00:25:02.949 Discovery Log Change Notices: Supported 00:25:02.949 Controller Attributes 00:25:02.949 128-bit Host Identifier: Not Supported 00:25:02.949 Non-Operational Permissive Mode: Not Supported 00:25:02.949 NVM Sets: Not Supported 00:25:02.950 Read Recovery Levels: Not Supported 00:25:02.950 Endurance Groups: Not Supported 00:25:02.950 Predictable Latency Mode: Not Supported 00:25:02.950 Traffic Based Keep ALive: Not Supported 00:25:02.950 Namespace Granularity: Not Supported 00:25:02.950 SQ Associations: Not Supported 00:25:02.950 UUID List: Not Supported 00:25:02.950 Multi-Domain Subsystem: Not Supported 00:25:02.950 Fixed Capacity Management: Not Supported 00:25:02.950 Variable Capacity Management: Not Supported 00:25:02.950 Delete Endurance Group: Not Supported 00:25:02.950 Delete NVM Set: Not Supported 00:25:02.950 Extended LBA Formats Supported: Not Supported 00:25:02.950 Flexible Data Placement Supported: Not Supported 00:25:02.950 00:25:02.950 Controller Memory Buffer Support 00:25:02.950 ================================ 00:25:02.950 Supported: No 00:25:02.950 00:25:02.950 Persistent Memory Region Support 00:25:02.950 ================================ 00:25:02.950 Supported: No 00:25:02.950 00:25:02.950 Admin Command Set Attributes 00:25:02.950 ============================ 00:25:02.950 Security Send/Receive: Not Supported 00:25:02.950 Format NVM: Not Supported 00:25:02.950 Firmware Activate/Download: Not Supported 00:25:02.950 Namespace Management: Not Supported 00:25:02.950 Device Self-Test: Not Supported 00:25:02.950 Directives: Not Supported 00:25:02.950 NVMe-MI: Not Supported 00:25:02.950 Virtualization Management: Not Supported 00:25:02.950 Doorbell Buffer Config: Not Supported 00:25:02.950 Get LBA Status Capability: Not Supported 00:25:02.950 Command & Feature Lockdown Capability: Not Supported 00:25:02.950 Abort Command Limit: 1 00:25:02.950 Async Event Request Limit: 1 00:25:02.950 Number of Firmware Slots: N/A 00:25:02.950 Firmware Slot 1 Read-Only: N/A 00:25:02.950 Firmware Activation Without Reset: N/A 00:25:02.950 Multiple Update Detection Support: N/A 
00:25:02.950 Firmware Update Granularity: No Information Provided 00:25:02.950 Per-Namespace SMART Log: No 00:25:02.950 Asymmetric Namespace Access Log Page: Not Supported 00:25:02.950 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:02.950 Command Effects Log Page: Not Supported 00:25:02.950 Get Log Page Extended Data: Supported 00:25:02.950 Telemetry Log Pages: Not Supported 00:25:02.950 Persistent Event Log Pages: Not Supported 00:25:02.950 Supported Log Pages Log Page: May Support 00:25:02.950 Commands Supported & Effects Log Page: Not Supported 00:25:02.950 Feature Identifiers & Effects Log Page:May Support 00:25:02.950 NVMe-MI Commands & Effects Log Page: May Support 00:25:02.950 Data Area 4 for Telemetry Log: Not Supported 00:25:02.950 Error Log Page Entries Supported: 1 00:25:02.950 Keep Alive: Not Supported 00:25:02.950 00:25:02.950 NVM Command Set Attributes 00:25:02.950 ========================== 00:25:02.950 Submission Queue Entry Size 00:25:02.950 Max: 1 00:25:02.950 Min: 1 00:25:02.950 Completion Queue Entry Size 00:25:02.950 Max: 1 00:25:02.950 Min: 1 00:25:02.950 Number of Namespaces: 0 00:25:02.950 Compare Command: Not Supported 00:25:02.950 Write Uncorrectable Command: Not Supported 00:25:02.950 Dataset Management Command: Not Supported 00:25:02.950 Write Zeroes Command: Not Supported 00:25:02.950 Set Features Save Field: Not Supported 00:25:02.950 Reservations: Not Supported 00:25:02.950 Timestamp: Not Supported 00:25:02.950 Copy: Not Supported 00:25:02.950 Volatile Write Cache: Not Present 00:25:02.950 Atomic Write Unit (Normal): 1 00:25:02.950 Atomic Write Unit (PFail): 1 00:25:02.950 Atomic Compare & Write Unit: 1 00:25:02.950 Fused Compare & Write: Not Supported 00:25:02.950 Scatter-Gather List 00:25:02.950 SGL Command Set: Supported 00:25:02.950 SGL Keyed: Not Supported 00:25:02.950 SGL Bit Bucket Descriptor: Not Supported 00:25:02.950 SGL Metadata Pointer: Not Supported 00:25:02.950 Oversized SGL: Not Supported 00:25:02.950 SGL Metadata Address: Not Supported 00:25:02.950 SGL Offset: Supported 00:25:02.950 Transport SGL Data Block: Not Supported 00:25:02.950 Replay Protected Memory Block: Not Supported 00:25:02.950 00:25:02.950 Firmware Slot Information 00:25:02.950 ========================= 00:25:02.950 Active slot: 0 00:25:02.950 00:25:02.950 00:25:02.950 Error Log 00:25:02.950 ========= 00:25:02.950 00:25:02.950 Active Namespaces 00:25:02.950 ================= 00:25:02.950 Discovery Log Page 00:25:02.950 ================== 00:25:02.950 Generation Counter: 2 00:25:02.950 Number of Records: 2 00:25:02.950 Record Format: 0 00:25:02.950 00:25:02.950 Discovery Log Entry 0 00:25:02.950 ---------------------- 00:25:02.950 Transport Type: 3 (TCP) 00:25:02.950 Address Family: 1 (IPv4) 00:25:02.950 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:02.950 Entry Flags: 00:25:02.950 Duplicate Returned Information: 0 00:25:02.950 Explicit Persistent Connection Support for Discovery: 0 00:25:02.950 Transport Requirements: 00:25:02.950 Secure Channel: Not Specified 00:25:02.950 Port ID: 1 (0x0001) 00:25:02.950 Controller ID: 65535 (0xffff) 00:25:02.950 Admin Max SQ Size: 32 00:25:02.950 Transport Service Identifier: 4420 00:25:02.950 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:02.950 Transport Address: 10.0.0.1 00:25:02.950 Discovery Log Entry 1 00:25:02.950 ---------------------- 00:25:02.950 Transport Type: 3 (TCP) 00:25:02.950 Address Family: 1 (IPv4) 00:25:02.950 Subsystem Type: 2 (NVM Subsystem) 00:25:02.950 Entry Flags: 
00:25:02.950 Duplicate Returned Information: 0 00:25:02.950 Explicit Persistent Connection Support for Discovery: 0 00:25:02.950 Transport Requirements: 00:25:02.950 Secure Channel: Not Specified 00:25:02.950 Port ID: 1 (0x0001) 00:25:02.950 Controller ID: 65535 (0xffff) 00:25:02.950 Admin Max SQ Size: 32 00:25:02.950 Transport Service Identifier: 4420 00:25:02.950 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:02.950 Transport Address: 10.0.0.1 00:25:02.950 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:02.950 EAL: No free 2048 kB hugepages reported on node 1 00:25:02.950 get_feature(0x01) failed 00:25:02.950 get_feature(0x02) failed 00:25:02.950 get_feature(0x04) failed 00:25:02.950 ===================================================== 00:25:02.950 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:02.950 ===================================================== 00:25:02.950 Controller Capabilities/Features 00:25:02.950 ================================ 00:25:02.950 Vendor ID: 0000 00:25:02.950 Subsystem Vendor ID: 0000 00:25:02.950 Serial Number: ee11198776d265ffd06c 00:25:02.950 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:02.950 Firmware Version: 6.7.0-68 00:25:02.950 Recommended Arb Burst: 6 00:25:02.950 IEEE OUI Identifier: 00 00 00 00:25:02.950 Multi-path I/O 00:25:02.950 May have multiple subsystem ports: Yes 00:25:02.950 May have multiple controllers: Yes 00:25:02.950 Associated with SR-IOV VF: No 00:25:02.950 Max Data Transfer Size: Unlimited 00:25:02.950 Max Number of Namespaces: 1024 00:25:02.950 Max Number of I/O Queues: 128 00:25:02.950 NVMe Specification Version (VS): 1.3 00:25:02.950 NVMe Specification Version (Identify): 1.3 00:25:02.950 Maximum Queue Entries: 1024 00:25:02.950 Contiguous Queues Required: No 00:25:02.950 Arbitration Mechanisms Supported 00:25:02.950 Weighted Round Robin: Not Supported 00:25:02.950 Vendor Specific: Not Supported 00:25:02.950 Reset Timeout: 7500 ms 00:25:02.950 Doorbell Stride: 4 bytes 00:25:02.950 NVM Subsystem Reset: Not Supported 00:25:02.950 Command Sets Supported 00:25:02.950 NVM Command Set: Supported 00:25:02.950 Boot Partition: Not Supported 00:25:02.950 Memory Page Size Minimum: 4096 bytes 00:25:02.950 Memory Page Size Maximum: 4096 bytes 00:25:02.950 Persistent Memory Region: Not Supported 00:25:02.950 Optional Asynchronous Events Supported 00:25:02.950 Namespace Attribute Notices: Supported 00:25:02.950 Firmware Activation Notices: Not Supported 00:25:02.950 ANA Change Notices: Supported 00:25:02.950 PLE Aggregate Log Change Notices: Not Supported 00:25:02.950 LBA Status Info Alert Notices: Not Supported 00:25:02.950 EGE Aggregate Log Change Notices: Not Supported 00:25:02.950 Normal NVM Subsystem Shutdown event: Not Supported 00:25:02.950 Zone Descriptor Change Notices: Not Supported 00:25:02.950 Discovery Log Change Notices: Not Supported 00:25:02.950 Controller Attributes 00:25:02.950 128-bit Host Identifier: Supported 00:25:02.950 Non-Operational Permissive Mode: Not Supported 00:25:02.950 NVM Sets: Not Supported 00:25:02.950 Read Recovery Levels: Not Supported 00:25:02.950 Endurance Groups: Not Supported 00:25:02.950 Predictable Latency Mode: Not Supported 00:25:02.950 Traffic Based Keep ALive: Supported 00:25:02.950 Namespace Granularity: Not Supported 
00:25:02.950 SQ Associations: Not Supported 00:25:02.950 UUID List: Not Supported 00:25:02.950 Multi-Domain Subsystem: Not Supported 00:25:02.950 Fixed Capacity Management: Not Supported 00:25:02.950 Variable Capacity Management: Not Supported 00:25:02.950 Delete Endurance Group: Not Supported 00:25:02.950 Delete NVM Set: Not Supported 00:25:02.950 Extended LBA Formats Supported: Not Supported 00:25:02.950 Flexible Data Placement Supported: Not Supported 00:25:02.950 00:25:02.950 Controller Memory Buffer Support 00:25:02.951 ================================ 00:25:02.951 Supported: No 00:25:02.951 00:25:02.951 Persistent Memory Region Support 00:25:02.951 ================================ 00:25:02.951 Supported: No 00:25:02.951 00:25:02.951 Admin Command Set Attributes 00:25:02.951 ============================ 00:25:02.951 Security Send/Receive: Not Supported 00:25:02.951 Format NVM: Not Supported 00:25:02.951 Firmware Activate/Download: Not Supported 00:25:02.951 Namespace Management: Not Supported 00:25:02.951 Device Self-Test: Not Supported 00:25:02.951 Directives: Not Supported 00:25:02.951 NVMe-MI: Not Supported 00:25:02.951 Virtualization Management: Not Supported 00:25:02.951 Doorbell Buffer Config: Not Supported 00:25:02.951 Get LBA Status Capability: Not Supported 00:25:02.951 Command & Feature Lockdown Capability: Not Supported 00:25:02.951 Abort Command Limit: 4 00:25:02.951 Async Event Request Limit: 4 00:25:02.951 Number of Firmware Slots: N/A 00:25:02.951 Firmware Slot 1 Read-Only: N/A 00:25:02.951 Firmware Activation Without Reset: N/A 00:25:02.951 Multiple Update Detection Support: N/A 00:25:02.951 Firmware Update Granularity: No Information Provided 00:25:02.951 Per-Namespace SMART Log: Yes 00:25:02.951 Asymmetric Namespace Access Log Page: Supported 00:25:02.951 ANA Transition Time : 10 sec 00:25:02.951 00:25:02.951 Asymmetric Namespace Access Capabilities 00:25:02.951 ANA Optimized State : Supported 00:25:02.951 ANA Non-Optimized State : Supported 00:25:02.951 ANA Inaccessible State : Supported 00:25:02.951 ANA Persistent Loss State : Supported 00:25:02.951 ANA Change State : Supported 00:25:02.951 ANAGRPID is not changed : No 00:25:02.951 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:02.951 00:25:02.951 ANA Group Identifier Maximum : 128 00:25:02.951 Number of ANA Group Identifiers : 128 00:25:02.951 Max Number of Allowed Namespaces : 1024 00:25:02.951 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:02.951 Command Effects Log Page: Supported 00:25:02.951 Get Log Page Extended Data: Supported 00:25:02.951 Telemetry Log Pages: Not Supported 00:25:02.951 Persistent Event Log Pages: Not Supported 00:25:02.951 Supported Log Pages Log Page: May Support 00:25:02.951 Commands Supported & Effects Log Page: Not Supported 00:25:02.951 Feature Identifiers & Effects Log Page:May Support 00:25:02.951 NVMe-MI Commands & Effects Log Page: May Support 00:25:02.951 Data Area 4 for Telemetry Log: Not Supported 00:25:02.951 Error Log Page Entries Supported: 128 00:25:02.951 Keep Alive: Supported 00:25:02.951 Keep Alive Granularity: 1000 ms 00:25:02.951 00:25:02.951 NVM Command Set Attributes 00:25:02.951 ========================== 00:25:02.951 Submission Queue Entry Size 00:25:02.951 Max: 64 00:25:02.951 Min: 64 00:25:02.951 Completion Queue Entry Size 00:25:02.951 Max: 16 00:25:02.951 Min: 16 00:25:02.951 Number of Namespaces: 1024 00:25:02.951 Compare Command: Not Supported 00:25:02.951 Write Uncorrectable Command: Not Supported 00:25:02.951 Dataset Management Command: Supported 
00:25:02.951 Write Zeroes Command: Supported 00:25:02.951 Set Features Save Field: Not Supported 00:25:02.951 Reservations: Not Supported 00:25:02.951 Timestamp: Not Supported 00:25:02.951 Copy: Not Supported 00:25:02.951 Volatile Write Cache: Present 00:25:02.951 Atomic Write Unit (Normal): 1 00:25:02.951 Atomic Write Unit (PFail): 1 00:25:02.951 Atomic Compare & Write Unit: 1 00:25:02.951 Fused Compare & Write: Not Supported 00:25:02.951 Scatter-Gather List 00:25:02.951 SGL Command Set: Supported 00:25:02.951 SGL Keyed: Not Supported 00:25:02.951 SGL Bit Bucket Descriptor: Not Supported 00:25:02.951 SGL Metadata Pointer: Not Supported 00:25:02.951 Oversized SGL: Not Supported 00:25:02.951 SGL Metadata Address: Not Supported 00:25:02.951 SGL Offset: Supported 00:25:02.951 Transport SGL Data Block: Not Supported 00:25:02.951 Replay Protected Memory Block: Not Supported 00:25:02.951 00:25:02.951 Firmware Slot Information 00:25:02.951 ========================= 00:25:02.951 Active slot: 0 00:25:02.951 00:25:02.951 Asymmetric Namespace Access 00:25:02.951 =========================== 00:25:02.951 Change Count : 0 00:25:02.951 Number of ANA Group Descriptors : 1 00:25:02.951 ANA Group Descriptor : 0 00:25:02.951 ANA Group ID : 1 00:25:02.951 Number of NSID Values : 1 00:25:02.951 Change Count : 0 00:25:02.951 ANA State : 1 00:25:02.951 Namespace Identifier : 1 00:25:02.951 00:25:02.951 Commands Supported and Effects 00:25:02.951 ============================== 00:25:02.951 Admin Commands 00:25:02.951 -------------- 00:25:02.951 Get Log Page (02h): Supported 00:25:02.951 Identify (06h): Supported 00:25:02.951 Abort (08h): Supported 00:25:02.951 Set Features (09h): Supported 00:25:02.951 Get Features (0Ah): Supported 00:25:02.951 Asynchronous Event Request (0Ch): Supported 00:25:02.951 Keep Alive (18h): Supported 00:25:02.951 I/O Commands 00:25:02.951 ------------ 00:25:02.951 Flush (00h): Supported 00:25:02.951 Write (01h): Supported LBA-Change 00:25:02.951 Read (02h): Supported 00:25:02.951 Write Zeroes (08h): Supported LBA-Change 00:25:02.951 Dataset Management (09h): Supported 00:25:02.951 00:25:02.951 Error Log 00:25:02.951 ========= 00:25:02.951 Entry: 0 00:25:02.951 Error Count: 0x3 00:25:02.951 Submission Queue Id: 0x0 00:25:02.951 Command Id: 0x5 00:25:02.951 Phase Bit: 0 00:25:02.951 Status Code: 0x2 00:25:02.951 Status Code Type: 0x0 00:25:02.951 Do Not Retry: 1 00:25:02.951 Error Location: 0x28 00:25:02.951 LBA: 0x0 00:25:02.951 Namespace: 0x0 00:25:02.951 Vendor Log Page: 0x0 00:25:02.951 ----------- 00:25:02.951 Entry: 1 00:25:02.951 Error Count: 0x2 00:25:02.951 Submission Queue Id: 0x0 00:25:02.951 Command Id: 0x5 00:25:02.951 Phase Bit: 0 00:25:02.951 Status Code: 0x2 00:25:02.951 Status Code Type: 0x0 00:25:02.951 Do Not Retry: 1 00:25:02.951 Error Location: 0x28 00:25:02.951 LBA: 0x0 00:25:02.951 Namespace: 0x0 00:25:02.951 Vendor Log Page: 0x0 00:25:02.951 ----------- 00:25:02.951 Entry: 2 00:25:02.951 Error Count: 0x1 00:25:02.951 Submission Queue Id: 0x0 00:25:02.951 Command Id: 0x4 00:25:02.951 Phase Bit: 0 00:25:02.951 Status Code: 0x2 00:25:02.951 Status Code Type: 0x0 00:25:02.951 Do Not Retry: 1 00:25:02.951 Error Location: 0x28 00:25:02.951 LBA: 0x0 00:25:02.951 Namespace: 0x0 00:25:02.951 Vendor Log Page: 0x0 00:25:02.951 00:25:02.951 Number of Queues 00:25:02.951 ================ 00:25:02.951 Number of I/O Submission Queues: 128 00:25:02.951 Number of I/O Completion Queues: 128 00:25:02.951 00:25:02.951 ZNS Specific Controller Data 00:25:02.951 
============================ 00:25:02.951 Zone Append Size Limit: 0 00:25:02.951 00:25:02.951 00:25:02.951 Active Namespaces 00:25:02.951 ================= 00:25:02.951 get_feature(0x05) failed 00:25:02.951 Namespace ID:1 00:25:02.951 Command Set Identifier: NVM (00h) 00:25:02.951 Deallocate: Supported 00:25:02.951 Deallocated/Unwritten Error: Not Supported 00:25:02.951 Deallocated Read Value: Unknown 00:25:02.951 Deallocate in Write Zeroes: Not Supported 00:25:02.951 Deallocated Guard Field: 0xFFFF 00:25:02.951 Flush: Supported 00:25:02.951 Reservation: Not Supported 00:25:02.951 Namespace Sharing Capabilities: Multiple Controllers 00:25:02.951 Size (in LBAs): 3125627568 (1490GiB) 00:25:02.951 Capacity (in LBAs): 3125627568 (1490GiB) 00:25:02.951 Utilization (in LBAs): 3125627568 (1490GiB) 00:25:02.951 UUID: e604daf3-839a-4003-a963-d605558cf077 00:25:02.951 Thin Provisioning: Not Supported 00:25:02.951 Per-NS Atomic Units: Yes 00:25:02.951 Atomic Boundary Size (Normal): 0 00:25:02.951 Atomic Boundary Size (PFail): 0 00:25:02.951 Atomic Boundary Offset: 0 00:25:02.951 NGUID/EUI64 Never Reused: No 00:25:02.951 ANA group ID: 1 00:25:02.951 Namespace Write Protected: No 00:25:02.951 Number of LBA Formats: 1 00:25:02.951 Current LBA Format: LBA Format #00 00:25:02.951 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:02.951 00:25:02.951 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:02.951 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:02.951 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:25:02.951 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:02.951 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:25:02.951 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:02.951 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:02.951 rmmod nvme_tcp 00:25:03.210 rmmod nvme_fabrics 00:25:03.210 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:03.210 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:25:03.210 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:25:03.210 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:25:03.210 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:03.211 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:03.211 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:03.211 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:03.211 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:03.211 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.211 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:03.211 01:30:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.113 01:30:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
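For reference, the configure_kernel_target/clean_kernel_target pair traced in this test boils down to the following configfs sequence. This is a condensed sketch: the NQN, block device, address, and port values are the ones from this run, but the attribute file names (attr_model, attr_allow_any_host, device_path, enable, addr_*) are the standard kernel nvmet ones and are assumed here, since xtrace shows only the echoed values, not their redirection targets.

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
modprobe nvmet                                     # kernel target core
mkdir $subsys $subsys/namespaces/1 $nvmet/ports/1
echo SPDK-nqn.2016-06.io.spdk:testnqn > $subsys/attr_model
echo 1 > $subsys/attr_allow_any_host               # accept any host NQN
echo /dev/nvme0n1 > $subsys/namespaces/1/device_path
echo 1 > $subsys/namespaces/1/enable
echo 10.0.0.1 > $nvmet/ports/1/addr_traddr
echo tcp > $nvmet/ports/1/addr_trtype
echo 4420 > $nvmet/ports/1/addr_trsvcid
echo ipv4 > $nvmet/ports/1/addr_adrfam
ln -s $subsys $nvmet/ports/1/subsystems/           # expose the subsystem on the port

The clean_kernel_target teardown that follows simply mirrors this in reverse: disable the namespace (echo 0), rm -f the port-to-subsystem symlink, rmdir the namespace, port, and subsystem directories, then modprobe -r nvmet_tcp nvmet.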
00:25:05.113 01:30:31 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:05.113 01:30:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:05.113 01:30:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:25:05.113 01:30:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:05.113 01:30:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:05.113 01:30:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:05.113 01:30:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:05.113 01:30:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:05.113 01:30:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:25:05.371 01:30:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:07.903 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:07.903 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:07.903 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:07.903 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:07.903 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:07.903 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:07.903 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:07.903 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:07.903 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:07.903 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:07.903 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:07.903 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:07.903 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:07.903 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:07.903 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:07.903 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:09.278 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:09.536 00:25:09.536 real 0m15.509s 00:25:09.536 user 0m3.521s 00:25:09.536 sys 0m7.688s 00:25:09.536 01:30:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:09.536 01:30:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:09.536 ************************************ 00:25:09.536 END TEST nvmf_identify_kernel_target 00:25:09.536 ************************************ 00:25:09.536 01:30:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:09.536 01:30:35 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:09.536 01:30:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:09.536 01:30:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:09.536 01:30:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:09.536 ************************************ 00:25:09.536 START TEST nvmf_auth_host 00:25:09.536 ************************************ 00:25:09.536 01:30:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:09.536 * Looking for test storage... 00:25:09.536 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:09.536 01:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:09.536 01:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:09.536 01:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:09.536 01:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:09.536 01:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:09.536 01:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:09.536 01:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:09.536 01:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:09.536 01:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:09.536 01:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:09.536 01:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:09.536 01:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:09.536 01:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:09.536 01:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:09.536 01:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:09.536 01:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:09.536 01:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:09.536 01:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:09.536 01:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:09.536 01:30:35 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:09.536 01:30:35 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:09.536 01:30:35 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:09.536 01:30:35 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.536 01:30:35 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.536 01:30:35 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.536 01:30:35 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:09.536 01:30:35 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.536 01:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:25:09.536 01:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:09.536 01:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:09.536 01:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:09.536 01:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:09.536 01:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:09.536 01:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:09.536 01:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:09.536 01:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:09.536 01:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:09.536 01:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:09.537 01:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:09.537 01:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:09.537 01:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:09.537 01:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:09.537 01:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:09.537 01:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:25:09.537 01:30:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:09.537 01:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:09.537 01:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:09.537 01:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:09.537 01:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:09.537 01:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:09.537 01:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.537 01:30:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:09.537 01:30:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:09.537 01:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:09.537 01:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:09.537 01:30:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:25:09.537 01:30:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:16.098 
01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:16.098 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:16.098 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:16.098 Found net devices under 0000:86:00.0: 
cvl_0_0 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:16.098 Found net devices under 0000:86:00.1: cvl_0_1 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:16.098 01:30:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:16.098 01:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:16.098 01:30:41 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:16.098 01:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:16.098 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:16.098 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:25:16.098 00:25:16.098 --- 10.0.0.2 ping statistics --- 00:25:16.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.098 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:25:16.098 01:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:16.098 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:16.098 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:25:16.099 00:25:16.099 --- 10.0.0.1 ping statistics --- 00:25:16.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.099 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:25:16.099 01:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:16.099 01:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:25:16.099 01:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:16.099 01:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:16.099 01:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:16.099 01:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:16.099 01:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:16.099 01:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:16.099 01:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:16.099 01:30:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:16.099 01:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:16.099 01:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:16.099 01:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.099 01:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=3515497 00:25:16.099 01:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 3515497 00:25:16.099 01:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:16.099 01:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 3515497 ']' 00:25:16.099 01:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:16.099 01:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:16.099 01:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:16.099 01:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:16.099 01:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.099 01:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:16.099 01:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:25:16.099 01:30:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:16.099 01:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:16.099 01:30:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.099 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:16.099 01:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:16.099 01:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:16.099 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:16.099 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:16.099 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:16.099 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:16.099 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:16.099 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:16.099 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=80c89e75b2c7e9208b6c7597f175204d 00:25:16.099 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:16.099 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.ck5 00:25:16.099 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 80c89e75b2c7e9208b6c7597f175204d 0 00:25:16.099 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 80c89e75b2c7e9208b6c7597f175204d 0 00:25:16.099 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:16.099 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:16.099 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=80c89e75b2c7e9208b6c7597f175204d 00:25:16.099 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:16.099 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:16.099 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.ck5 00:25:16.099 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.ck5 00:25:16.099 01:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.ck5 00:25:16.099 01:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:16.099 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:16.099 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:16.099 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:16.099 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:16.099 
01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:16.099 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:16.099 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9f21493280e14bfbc6bfd5f1c811c5316b547ae40bdb012c6f462420420982f4 00:25:16.099 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:16.099 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.qfS 00:25:16.099 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9f21493280e14bfbc6bfd5f1c811c5316b547ae40bdb012c6f462420420982f4 3 00:25:16.099 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9f21493280e14bfbc6bfd5f1c811c5316b547ae40bdb012c6f462420420982f4 3 00:25:16.099 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:16.099 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:16.099 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9f21493280e14bfbc6bfd5f1c811c5316b547ae40bdb012c6f462420420982f4 00:25:16.099 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:16.099 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:16.357 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.qfS 00:25:16.357 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.qfS 00:25:16.357 01:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.qfS 00:25:16.357 01:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:16.357 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:16.357 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:16.357 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:16.357 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8c467bcc0456aeb60c70da2cb69a5c95ce5da77f9c4a541a 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.qiO 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8c467bcc0456aeb60c70da2cb69a5c95ce5da77f9c4a541a 0 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8c467bcc0456aeb60c70da2cb69a5c95ce5da77f9c4a541a 0 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8c467bcc0456aeb60c70da2cb69a5c95ce5da77f9c4a541a 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.qiO 00:25:16.358 01:30:42 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.qiO 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.qiO 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=002207dd36ed01b90749ace9f0de708991342bdfe9438db1 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Og6 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 002207dd36ed01b90749ace9f0de708991342bdfe9438db1 2 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 002207dd36ed01b90749ace9f0de708991342bdfe9438db1 2 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=002207dd36ed01b90749ace9f0de708991342bdfe9438db1 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Og6 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Og6 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Og6 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f2cf237629a18f860b792adc2dc554fd 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.aow 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f2cf237629a18f860b792adc2dc554fd 1 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f2cf237629a18f860b792adc2dc554fd 1 
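gen_dhchap_key, traced repeatedly above, has two steps: xxd pulls the requested number of random bytes from /dev/urandom as a hex string, and format_key wraps that hex in the DH-HMAC-CHAP secret representation, DHHC-1:<digest-id>:<base64 payload>:, where the digest id is 00 for a null (unhashed) key, 01 for sha256, 02 for sha384, and 03 for sha512, matching the digests map in the trace. The python body itself is elided by xtrace; the sketch below reconstructs the wrapping from the NVMe TP 8006 secret format (base64 over the key bytes plus a little-endian CRC32 of them) rather than from this log, so treat its details as an assumption:

key=$(xxd -p -c0 -l 32 /dev/urandom)   # 32 random bytes, hex-encoded
digest=1                               # 0=null, 1=sha256, 2=sha384, 3=sha512
python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
# TP 8006 secret payload: base64(key || CRC32(key)), CRC stored little-endian
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))
EOF

The resulting secrets are chmod 0600 and stashed under /tmp (spdk.key-null.ck5, spdk.key-sha512.qfS, and so on), paired into the keys[] and ckeys[] arrays for use as DH-HMAC-CHAP key/controller-key pairs later in the test.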
00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f2cf237629a18f860b792adc2dc554fd 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.aow 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.aow 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.aow 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6e38536fd7a043b8fe8db7cb59c397f9 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.qPR 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6e38536fd7a043b8fe8db7cb59c397f9 1 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6e38536fd7a043b8fe8db7cb59c397f9 1 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6e38536fd7a043b8fe8db7cb59c397f9 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.qPR 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.qPR 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.qPR 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:16.358 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:16.617 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:16.617 01:30:42 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=bdc9fac3d97d755f7cec26fbe059a47909611eb19ae341f0 00:25:16.617 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:16.617 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.5nf 00:25:16.617 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key bdc9fac3d97d755f7cec26fbe059a47909611eb19ae341f0 2 00:25:16.617 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 bdc9fac3d97d755f7cec26fbe059a47909611eb19ae341f0 2 00:25:16.617 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:16.617 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:16.617 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=bdc9fac3d97d755f7cec26fbe059a47909611eb19ae341f0 00:25:16.617 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:16.617 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:16.617 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.5nf 00:25:16.617 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.5nf 00:25:16.617 01:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.5nf 00:25:16.617 01:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:16.617 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:16.617 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:16.617 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:16.617 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:16.617 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:16.617 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:16.617 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1664663a4c6865974e64a06875327c86 00:25:16.617 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:16.617 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.F4u 00:25:16.617 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1664663a4c6865974e64a06875327c86 0 00:25:16.617 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1664663a4c6865974e64a06875327c86 0 00:25:16.617 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:16.617 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:16.617 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1664663a4c6865974e64a06875327c86 00:25:16.617 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:16.617 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:16.617 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.F4u 00:25:16.617 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.F4u 00:25:16.617 01:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.F4u 00:25:16.617 01:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:16.617 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:25:16.618 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:16.618 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:16.618 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:16.618 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:16.618 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:16.618 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=72143e4ac0d67017a80a7428e9c0cab6614155fdb706fc8bd10ab701b5869626 00:25:16.618 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:16.618 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.r19 00:25:16.618 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 72143e4ac0d67017a80a7428e9c0cab6614155fdb706fc8bd10ab701b5869626 3 00:25:16.618 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 72143e4ac0d67017a80a7428e9c0cab6614155fdb706fc8bd10ab701b5869626 3 00:25:16.618 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:16.618 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:16.618 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=72143e4ac0d67017a80a7428e9c0cab6614155fdb706fc8bd10ab701b5869626 00:25:16.618 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:16.618 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:16.618 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.r19 00:25:16.618 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.r19 00:25:16.618 01:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.r19 00:25:16.618 01:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:16.618 01:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3515497 00:25:16.618 01:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 3515497 ']' 00:25:16.618 01:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:16.618 01:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:16.618 01:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:16.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
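[At this point keys[0..4] and ckeys[0..3] are populated; ckeys[4] is left empty on purpose, so key4 is later exercised without a controller (bidirectional) key. Once the target process is listening on /var/tmp/spdk.sock, each key file is registered with the target's keyring. Condensed, the host/auth.sh@80-82 loop traced below is:]

for i in "${!keys[@]}"; do
  rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"
  # only register a controller key when one was generated for this slot
  [[ -n ${ckeys[i]} ]] && rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
done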
00:25:16.618 01:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:16.618 01:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ck5 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.qfS ]] 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.qfS 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.qiO 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Og6 ]] 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Og6 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.aow 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.qPR ]] 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.qPR 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.5nf 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.F4u ]] 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.F4u 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.r19 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:16.876 01:30:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:25:19.404 Waiting for block devices as requested
00:25:19.404 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:25:19.404 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:25:19.404 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:25:19.662 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:25:19.662 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:25:19.662 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:25:19.920 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:25:19.920 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:25:19.920 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:25:19.920 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:25:20.177 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:25:20.177 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:25:20.177 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:25:20.177 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:25:20.435 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:25:20.435 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:25:20.435 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:25:21.370 01:30:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:25:21.370 No valid GPT data, bailing
00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420
00:25:21.370
00:25:21.370 Discovery Log Number of Records 2, Generation counter 2
00:25:21.370 =====Discovery Log Entry 0======
00:25:21.370 trtype: tcp
00:25:21.370 adrfam: ipv4
00:25:21.370 subtype: current discovery subsystem
00:25:21.370 treq: not specified, sq flow control disable supported
00:25:21.370 portid: 1
00:25:21.370 trsvcid: 4420
00:25:21.370 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:25:21.370 traddr: 10.0.0.1
00:25:21.370 eflags: none
00:25:21.370 sectype: none
00:25:21.370 =====Discovery Log Entry 1======
00:25:21.370 trtype: tcp
00:25:21.370 adrfam: ipv4
00:25:21.370 subtype: nvme subsystem
00:25:21.370 treq: not specified, sq flow control disable supported
00:25:21.370 portid: 1
00:25:21.370 trsvcid: 4420
00:25:21.370 subnqn: nqn.2024-02.io.spdk:cnode0
00:25:21.370 traddr: 10.0.0.1
00:25:21.370 eflags: none
00:25:21.370 sectype: none
00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM0NjdiY2MwNDU2YWViNjBjNzBkYTJjYjY5YTVjOTVjZTVkYTc3ZjljNGE1NDFhtQD/xA==: 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM0NjdiY2MwNDU2YWViNjBjNzBkYTJjYjY5YTVjOTVjZTVkYTc3ZjljNGE1NDFhtQD/xA==: 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==:
]] 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:21.370 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:21.371 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:21.371 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:21.371 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.371 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.371 nvme0n1 00:25:21.371 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.371 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.371 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.371 01:30:47 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.371 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.371 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODBjODllNzViMmM3ZTkyMDhiNmM3NTk3ZjE3NTIwNGThstEF: 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWYyMTQ5MzI4MGUxNGJmYmM2YmZkNWYxYzgxMWM1MzE2YjU0N2FlNDBiZGIwMTJjNmY0NjI0MjA0MjA5ODJmNHLFhP8=: 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODBjODllNzViMmM3ZTkyMDhiNmM3NTk3ZjE3NTIwNGThstEF: 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWYyMTQ5MzI4MGUxNGJmYmM2YmZkNWYxYzgxMWM1MzE2YjU0N2FlNDBiZGIwMTJjNmY0NjI0MjA0MjA5ODJmNHLFhP8=: ]] 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWYyMTQ5MzI4MGUxNGJmYmM2YmZkNWYxYzgxMWM1MzE2YjU0N2FlNDBiZGIwMTJjNmY0NjI0MjA0MjA5ODJmNHLFhP8=: 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.629 
01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.629 nvme0n1 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM0NjdiY2MwNDU2YWViNjBjNzBkYTJjYjY5YTVjOTVjZTVkYTc3ZjljNGE1NDFhtQD/xA==: 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:21.629 01:30:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM0NjdiY2MwNDU2YWViNjBjNzBkYTJjYjY5YTVjOTVjZTVkYTc3ZjljNGE1NDFhtQD/xA==: 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: ]] 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:21.629 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:21.630 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:21.630 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.630 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:21.630 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.630 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.630 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.630 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.630 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:21.630 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:21.630 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:21.630 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.630 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.630 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:21.630 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.630 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:21.630 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:21.630 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:21.630 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:21.630 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.630 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.887 nvme0n1 00:25:21.887 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.887 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.887 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.887 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.887 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
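[That first iteration shows the pattern the rest of this log repeats for every digest (sha256, sha384, sha512), dhgroup (ffdhe2048 through ffdhe8192) and keyid (0-4): load the secret on the target side, then attach and detach on the host side. Reduced to a sketch; the dhchap_* configfs attribute names on the nvmet host entry are assumptions, since xtrace hides the redirects, while the rpc_cmd calls are verbatim from the trace.]

nvmet_auth_set_key() { # e.g. nvmet_auth_set_key sha256 ffdhe2048 1
  local digest=$1 dhgroup=$2 keyid=$3 key ckey
  local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  key=$(< "${keys[keyid]}") ckey=$(< "${ckeys[keyid]:-/dev/null}")
  echo "hmac($digest)" > "$host/dhchap_hash"    # attribute names assumed
  echo "$dhgroup" > "$host/dhchap_dhgroup"
  echo "$key" > "$host/dhchap_key"
  [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
}

connect_authenticate() { # e.g. connect_authenticate sha256 ffdhe2048 1
  local digest=$1 dhgroup=$2 keyid=$3
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
  # the handshake succeeded only if the controller actually shows up
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0
}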
00:25:21.887 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.887 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.887 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.887 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.887 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.887 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.887 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.887 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:21.887 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.887 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:21.887 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:21.887 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:21.887 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjJjZjIzNzYyOWExOGY4NjBiNzkyYWRjMmRjNTU0ZmSTD7aY: 00:25:21.887 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmUzODUzNmZkN2EwNDNiOGZlOGRiN2NiNTljMzk3ZjmWNCeq: 00:25:21.887 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:21.887 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:21.887 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjJjZjIzNzYyOWExOGY4NjBiNzkyYWRjMmRjNTU0ZmSTD7aY: 00:25:21.887 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmUzODUzNmZkN2EwNDNiOGZlOGRiN2NiNTljMzk3ZjmWNCeq: ]] 00:25:21.887 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmUzODUzNmZkN2EwNDNiOGZlOGRiN2NiNTljMzk3ZjmWNCeq: 00:25:21.887 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:21.887 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.887 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:21.887 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:21.887 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:21.887 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.887 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:21.887 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.887 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.887 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.888 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.888 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:21.888 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:21.888 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:21.888 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.888 01:30:47 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.888 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:21.888 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.888 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:21.888 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:21.888 01:30:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:21.888 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:21.888 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.888 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.145 nvme0n1 00:25:22.145 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.145 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.145 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.145 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.145 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.145 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.145 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.145 01:30:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.145 01:30:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.145 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.145 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.145 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.145 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:22.145 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.145 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:22.145 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:22.145 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:22.145 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmRjOWZhYzNkOTdkNzU1ZjdjZWMyNmZiZTA1OWE0NzkwOTYxMWViMTlhZTM0MWYwJf2+Rw==: 00:25:22.146 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTY2NDY2M2E0YzY4NjU5NzRlNjRhMDY4NzUzMjdjODaLM0W8: 00:25:22.146 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:22.146 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:22.146 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmRjOWZhYzNkOTdkNzU1ZjdjZWMyNmZiZTA1OWE0NzkwOTYxMWViMTlhZTM0MWYwJf2+Rw==: 00:25:22.146 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTY2NDY2M2E0YzY4NjU5NzRlNjRhMDY4NzUzMjdjODaLM0W8: ]] 00:25:22.146 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTY2NDY2M2E0YzY4NjU5NzRlNjRhMDY4NzUzMjdjODaLM0W8: 00:25:22.146 01:30:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:22.146 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.146 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:22.146 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:22.146 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:22.146 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.146 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:22.146 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.146 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.146 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.146 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.146 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:22.146 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:22.146 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:22.146 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.146 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.146 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:22.146 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.146 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:22.146 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:22.146 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:22.146 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:22.146 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.146 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.403 nvme0n1 00:25:22.403 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.403 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.403 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.403 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.403 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.403 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.403 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.403 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.403 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.403 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.403 01:30:48 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.403 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.403 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:22.403 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.403 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:22.403 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:22.403 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:22.403 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzIxNDNlNGFjMGQ2NzAxN2E4MGE3NDI4ZTljMGNhYjY2MTQxNTVmZGI3MDZmYzhiZDEwYWI3MDFiNTg2OTYyNrbisAI=: 00:25:22.403 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:22.403 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:22.403 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:22.403 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzIxNDNlNGFjMGQ2NzAxN2E4MGE3NDI4ZTljMGNhYjY2MTQxNTVmZGI3MDZmYzhiZDEwYWI3MDFiNTg2OTYyNrbisAI=: 00:25:22.403 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:22.403 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:22.403 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.403 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:22.403 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:22.403 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:22.404 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.404 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:22.404 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.404 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.404 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.404 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.404 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:22.404 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:22.404 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:22.404 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.404 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.404 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:22.404 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.404 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:22.404 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:22.404 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:22.404 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:22.404 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.404 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.404 nvme0n1 00:25:22.404 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.404 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.404 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.404 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.404 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.404 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.665 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.665 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.665 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.665 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.665 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.665 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:22.665 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.665 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:22.665 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.665 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:22.665 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:22.665 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:22.665 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODBjODllNzViMmM3ZTkyMDhiNmM3NTk3ZjE3NTIwNGThstEF: 00:25:22.665 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWYyMTQ5MzI4MGUxNGJmYmM2YmZkNWYxYzgxMWM1MzE2YjU0N2FlNDBiZGIwMTJjNmY0NjI0MjA0MjA5ODJmNHLFhP8=: 00:25:22.665 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:22.665 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:22.665 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODBjODllNzViMmM3ZTkyMDhiNmM3NTk3ZjE3NTIwNGThstEF: 00:25:22.665 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWYyMTQ5MzI4MGUxNGJmYmM2YmZkNWYxYzgxMWM1MzE2YjU0N2FlNDBiZGIwMTJjNmY0NjI0MjA0MjA5ODJmNHLFhP8=: ]] 00:25:22.665 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWYyMTQ5MzI4MGUxNGJmYmM2YmZkNWYxYzgxMWM1MzE2YjU0N2FlNDBiZGIwMTJjNmY0NjI0MjA0MjA5ODJmNHLFhP8=: 00:25:22.665 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:22.665 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.665 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:22.665 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:22.665 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:22.665 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:22.665 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:22.665 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.665 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.665 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.665 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.665 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:22.665 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:22.665 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:22.665 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.665 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.665 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:22.665 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.665 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:22.665 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:22.665 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:22.665 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:22.665 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.666 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.666 nvme0n1 00:25:22.666 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.666 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.666 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.666 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.666 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.666 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.666 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.666 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.666 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.666 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM0NjdiY2MwNDU2YWViNjBjNzBkYTJjYjY5YTVjOTVjZTVkYTc3ZjljNGE1NDFhtQD/xA==: 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM0NjdiY2MwNDU2YWViNjBjNzBkYTJjYjY5YTVjOTVjZTVkYTc3ZjljNGE1NDFhtQD/xA==: 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: ]] 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.976 nvme0n1 00:25:22.976 
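The key= and ckey= strings traced above are DH-HMAC-CHAP secrets in the DHHC-1 representation used by NVMe in-band authentication: "DHHC-1:<t>:<base64>:", where <t> says how the secret is transformed before use (00 = used as-is, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload is the raw secret followed by a 4-byte CRC-32 check value (e.g. key2's 48-character payload decodes to a 32-byte secret plus the CRC). A sketch of producing such a secret with nvme-cli follows; the flag spellings are from gen-dhchap-key as commonly documented and may differ across nvme-cli versions, so treat it as illustrative rather than the exact command this suite ran:

  # Sketch: emit a DHHC-1:01:... secret (32-byte key, SHA-256 transform),
  # the same shape as the keyid=2 secrets in this trace. Flags assumed.
  nvme gen-dhchap-key --key-length=32 --hmac=1 --nqn=nqn.2024-02.io.spdk:host0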
01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjJjZjIzNzYyOWExOGY4NjBiNzkyYWRjMmRjNTU0ZmSTD7aY: 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmUzODUzNmZkN2EwNDNiOGZlOGRiN2NiNTljMzk3ZjmWNCeq: 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjJjZjIzNzYyOWExOGY4NjBiNzkyYWRjMmRjNTU0ZmSTD7aY: 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmUzODUzNmZkN2EwNDNiOGZlOGRiN2NiNTljMzk3ZjmWNCeq: ]] 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmUzODUzNmZkN2EwNDNiOGZlOGRiN2NiNTljMzk3ZjmWNCeq: 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:22.976 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.977 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:22.977 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:22.977 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:22.977 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.977 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:22.977 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.977 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.977 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.977 01:30:48 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:25:22.977 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:22.977 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:22.977 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:22.977 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.977 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.977 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:22.977 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.977 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:22.977 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:22.977 01:30:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:22.977 01:30:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:22.977 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.977 01:30:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.235 nvme0n1 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmRjOWZhYzNkOTdkNzU1ZjdjZWMyNmZiZTA1OWE0NzkwOTYxMWViMTlhZTM0MWYwJf2+Rw==: 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTY2NDY2M2E0YzY4NjU5NzRlNjRhMDY4NzUzMjdjODaLM0W8: 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
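The echo 'hmac(sha256)' / echo ffdhe3072 / echo DHHC-1:... sequence that repeats above is nvmet_auth_set_key pointing the kernel nvmet target at the secrets to expect from this host; xtrace does not print redirections, which is why the echoes look bare. A minimal sketch of the equivalent configfs writes, assuming the usual nvmet host layout (attribute names from the kernel target as I recall them) and abbreviating the key strings that appear in full in the log:

  # Hedged sketch: target-side DH-HMAC-CHAP setup for the keyid=3 pass above.
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host/dhchap_hash"       # digest for this pass
  echo ffdhe3072      > "$host/dhchap_dhgroup"    # DH group for this pass
  echo 'DHHC-1:02:YmRj...' > "$host/dhchap_key"       # host secret (key3, abbreviated)
  echo 'DHHC-1:00:MTY2...' > "$host/dhchap_ctrl_key"  # controller secret (ckey3, abbreviated)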
00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmRjOWZhYzNkOTdkNzU1ZjdjZWMyNmZiZTA1OWE0NzkwOTYxMWViMTlhZTM0MWYwJf2+Rw==: 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTY2NDY2M2E0YzY4NjU5NzRlNjRhMDY4NzUzMjdjODaLM0W8: ]] 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTY2NDY2M2E0YzY4NjU5NzRlNjRhMDY4NzUzMjdjODaLM0W8: 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.235 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.493 nvme0n1 00:25:23.493 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.493 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.493 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.493 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.493 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.493 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.493 
01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.493 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.493 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.493 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.493 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.493 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.493 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:23.493 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.493 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:23.493 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:23.493 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:23.493 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzIxNDNlNGFjMGQ2NzAxN2E4MGE3NDI4ZTljMGNhYjY2MTQxNTVmZGI3MDZmYzhiZDEwYWI3MDFiNTg2OTYyNrbisAI=: 00:25:23.493 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:23.493 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:23.493 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:23.493 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzIxNDNlNGFjMGQ2NzAxN2E4MGE3NDI4ZTljMGNhYjY2MTQxNTVmZGI3MDZmYzhiZDEwYWI3MDFiNTg2OTYyNrbisAI=: 00:25:23.493 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:23.493 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:23.493 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.493 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:23.493 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:23.493 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:23.493 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.493 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:23.493 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.493 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.493 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.493 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.493 01:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:23.493 01:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:23.493 01:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:23.493 01:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.493 01:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.493 01:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:23.493 01:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.493 01:30:49 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:23.493 01:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:23.493 01:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:23.493 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:23.493 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.493 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.751 nvme0n1 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODBjODllNzViMmM3ZTkyMDhiNmM3NTk3ZjE3NTIwNGThstEF: 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWYyMTQ5MzI4MGUxNGJmYmM2YmZkNWYxYzgxMWM1MzE2YjU0N2FlNDBiZGIwMTJjNmY0NjI0MjA0MjA5ODJmNHLFhP8=: 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODBjODllNzViMmM3ZTkyMDhiNmM3NTk3ZjE3NTIwNGThstEF: 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWYyMTQ5MzI4MGUxNGJmYmM2YmZkNWYxYzgxMWM1MzE2YjU0N2FlNDBiZGIwMTJjNmY0NjI0MjA0MjA5ODJmNHLFhP8=: ]] 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWYyMTQ5MzI4MGUxNGJmYmM2YmZkNWYxYzgxMWM1MzE2YjU0N2FlNDBiZGIwMTJjNmY0NjI0MjA0MjA5ODJmNHLFhP8=: 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:23.751 01:30:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.751 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.008 nvme0n1 00:25:24.008 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.008 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.008 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.008 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.008 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.008 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.008 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.008 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.008 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.008 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.008 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.008 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:24.008 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:24.008 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.008 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:24.008 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:24.008 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:24.008 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM0NjdiY2MwNDU2YWViNjBjNzBkYTJjYjY5YTVjOTVjZTVkYTc3ZjljNGE1NDFhtQD/xA==: 00:25:24.008 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: 00:25:24.008 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:24.008 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:24.008 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM0NjdiY2MwNDU2YWViNjBjNzBkYTJjYjY5YTVjOTVjZTVkYTc3ZjljNGE1NDFhtQD/xA==: 00:25:24.008 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: ]] 00:25:24.008 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: 00:25:24.008 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:24.008 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.008 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:24.008 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:24.008 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:24.008 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.008 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:24.008 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.008 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.008 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.009 01:30:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.009 01:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:24.009 01:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:24.009 01:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:24.009 01:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.009 01:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.009 01:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:24.009 01:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.009 01:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:24.009 01:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:24.009 01:30:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:24.009 01:30:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:24.009 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.009 01:30:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.267 nvme0n1 00:25:24.267 01:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.267 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.267 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.267 01:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.267 01:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.267 01:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.267 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.267 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.267 01:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.267 01:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.267 01:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.267 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.267 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:24.267 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.267 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:24.267 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:24.267 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:24.267 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjJjZjIzNzYyOWExOGY4NjBiNzkyYWRjMmRjNTU0ZmSTD7aY: 00:25:24.267 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmUzODUzNmZkN2EwNDNiOGZlOGRiN2NiNTljMzk3ZjmWNCeq: 00:25:24.267 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:24.267 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:24.267 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjJjZjIzNzYyOWExOGY4NjBiNzkyYWRjMmRjNTU0ZmSTD7aY: 00:25:24.267 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmUzODUzNmZkN2EwNDNiOGZlOGRiN2NiNTljMzk3ZjmWNCeq: ]] 00:25:24.267 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmUzODUzNmZkN2EwNDNiOGZlOGRiN2NiNTljMzk3ZjmWNCeq: 00:25:24.267 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:24.267 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.267 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:24.267 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:24.267 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:24.267 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.267 01:30:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:24.267 01:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.267 01:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.523 01:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.523 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.523 01:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:24.523 01:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:24.523 01:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:24.523 01:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.523 01:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.523 01:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:24.523 01:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.523 01:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:24.523 01:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:24.523 01:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:24.523 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:24.523 01:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.523 01:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.523 nvme0n1 00:25:24.523 01:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.523 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.523 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.779 01:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.779 01:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.779 01:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.779 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.779 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.779 01:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.779 01:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.779 01:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.779 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.779 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:24.779 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.779 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:24.779 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:24.779 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
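The host half of each pass is the two SPDK RPCs traced just above: bdev_nvme_set_options pins the digest and DH group the initiator may negotiate, and bdev_nvme_attach_controller performs the fabric connect with DH-HMAC-CHAP. Note that key2/ckey2 are key names registered with SPDK's keyring earlier in the script (outside this excerpt), not the secrets themselves, and that in the keyid=4 passes ckey is empty, so --dhchap-ctrlr-key is omitted and authentication is unidirectional. Re-run standalone, the ffdhe4096/keyid=2 pass amounts to this sketch (all parameters copied from the trace):

  # Host-side sketch of one connect_authenticate pass via SPDK's rpc.py.
  rpc=scripts/rpc.py   # path inside an SPDK checkout
  $rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2   # keyring names, not secrets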
00:25:24.779 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmRjOWZhYzNkOTdkNzU1ZjdjZWMyNmZiZTA1OWE0NzkwOTYxMWViMTlhZTM0MWYwJf2+Rw==: 00:25:24.779 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTY2NDY2M2E0YzY4NjU5NzRlNjRhMDY4NzUzMjdjODaLM0W8: 00:25:24.779 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:24.779 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:24.779 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmRjOWZhYzNkOTdkNzU1ZjdjZWMyNmZiZTA1OWE0NzkwOTYxMWViMTlhZTM0MWYwJf2+Rw==: 00:25:24.779 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTY2NDY2M2E0YzY4NjU5NzRlNjRhMDY4NzUzMjdjODaLM0W8: ]] 00:25:24.779 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTY2NDY2M2E0YzY4NjU5NzRlNjRhMDY4NzUzMjdjODaLM0W8: 00:25:24.779 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:24.779 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.779 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:24.779 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:24.779 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:24.779 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.780 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:24.780 01:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.780 01:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.780 01:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.780 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.780 01:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:24.780 01:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:24.780 01:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:24.780 01:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.780 01:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.780 01:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:24.780 01:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.780 01:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:24.780 01:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:24.780 01:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:24.780 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:24.780 01:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.780 01:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.037 nvme0n1 00:25:25.037 01:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.037 01:30:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.037 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.037 01:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.037 01:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.037 01:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.037 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.037 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.037 01:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.037 01:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.037 01:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.037 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.037 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:25.037 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.037 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:25.037 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:25.037 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:25.037 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzIxNDNlNGFjMGQ2NzAxN2E4MGE3NDI4ZTljMGNhYjY2MTQxNTVmZGI3MDZmYzhiZDEwYWI3MDFiNTg2OTYyNrbisAI=: 00:25:25.037 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:25.037 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:25.037 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:25.037 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzIxNDNlNGFjMGQ2NzAxN2E4MGE3NDI4ZTljMGNhYjY2MTQxNTVmZGI3MDZmYzhiZDEwYWI3MDFiNTg2OTYyNrbisAI=: 00:25:25.037 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:25.037 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:25.037 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.037 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:25.037 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:25.037 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:25.037 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.037 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:25.037 01:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.037 01:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.037 01:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.037 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.037 01:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:25.037 01:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:25.037 01:30:50 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:25:25.037 01:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.037 01:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.037 01:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:25.037 01:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.037 01:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:25.037 01:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:25.037 01:30:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:25.037 01:30:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:25.037 01:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.037 01:30:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.295 nvme0n1 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODBjODllNzViMmM3ZTkyMDhiNmM3NTk3ZjE3NTIwNGThstEF: 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWYyMTQ5MzI4MGUxNGJmYmM2YmZkNWYxYzgxMWM1MzE2YjU0N2FlNDBiZGIwMTJjNmY0NjI0MjA0MjA5ODJmNHLFhP8=: 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODBjODllNzViMmM3ZTkyMDhiNmM3NTk3ZjE3NTIwNGThstEF: 00:25:25.295 01:30:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWYyMTQ5MzI4MGUxNGJmYmM2YmZkNWYxYzgxMWM1MzE2YjU0N2FlNDBiZGIwMTJjNmY0NjI0MjA0MjA5ODJmNHLFhP8=: ]] 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWYyMTQ5MzI4MGUxNGJmYmM2YmZkNWYxYzgxMWM1MzE2YjU0N2FlNDBiZGIwMTJjNmY0NjI0MjA0MjA5ODJmNHLFhP8=: 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.295 01:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.860 nvme0n1 00:25:25.860 01:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.860 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.860 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.860 01:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.860 01:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.860 01:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.860 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.860 
01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.860 01:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.860 01:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.860 01:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.860 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.860 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:25.860 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.860 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:25.860 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:25.860 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:25.860 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM0NjdiY2MwNDU2YWViNjBjNzBkYTJjYjY5YTVjOTVjZTVkYTc3ZjljNGE1NDFhtQD/xA==: 00:25:25.860 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: 00:25:25.860 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:25.860 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:25.860 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM0NjdiY2MwNDU2YWViNjBjNzBkYTJjYjY5YTVjOTVjZTVkYTc3ZjljNGE1NDFhtQD/xA==: 00:25:25.860 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: ]] 00:25:25.860 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: 00:25:25.860 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:25.860 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.860 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:25.860 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:25.860 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:25.860 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.860 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:25.860 01:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.860 01:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.860 01:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.860 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.860 01:30:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:25.860 01:30:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:25.860 01:30:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:25.860 01:30:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.860 01:30:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.860 01:30:51 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:25.860 01:30:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.860 01:30:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:25.860 01:30:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:25.860 01:30:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:25.860 01:30:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:25.860 01:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.860 01:30:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.118 nvme0n1 00:25:26.118 01:30:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.118 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.118 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.118 01:30:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.118 01:30:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.118 01:30:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.118 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.118 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.118 01:30:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.118 01:30:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.118 01:30:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.118 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.118 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:26.118 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.118 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:26.118 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:26.118 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:26.118 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjJjZjIzNzYyOWExOGY4NjBiNzkyYWRjMmRjNTU0ZmSTD7aY: 00:25:26.118 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmUzODUzNmZkN2EwNDNiOGZlOGRiN2NiNTljMzk3ZjmWNCeq: 00:25:26.118 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:26.118 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:26.118 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjJjZjIzNzYyOWExOGY4NjBiNzkyYWRjMmRjNTU0ZmSTD7aY: 00:25:26.118 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmUzODUzNmZkN2EwNDNiOGZlOGRiN2NiNTljMzk3ZjmWNCeq: ]] 00:25:26.118 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmUzODUzNmZkN2EwNDNiOGZlOGRiN2NiNTljMzk3ZjmWNCeq: 00:25:26.118 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:26.118 01:30:52 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.118 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:26.118 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:26.118 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:26.118 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.118 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:26.118 01:30:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.118 01:30:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.377 01:30:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.377 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.377 01:30:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:26.377 01:30:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:26.377 01:30:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:26.377 01:30:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.377 01:30:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.377 01:30:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:26.377 01:30:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.377 01:30:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:26.377 01:30:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:26.377 01:30:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:26.377 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:26.377 01:30:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.377 01:30:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.635 nvme0n1 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.635 
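Two fragments that recur in every pass are worth unpacking. get_main_ns_ip resolves the initiator address with an associative array plus bash indirect expansion (the ip=NVMF_INITIATOR_IP followed by echo 10.0.0.1 pairs above), and a pass counts as successful only if a controller named nvme0 shows up afterwards, which is then detached before the next keyid. As a sketch of both:

  # Address selection, as traced: transport -> env var name -> value.
  declare -A ip_candidates=(["rdma"]=NVMF_FIRST_TARGET_IP ["tcp"]=NVMF_INITIATOR_IP)
  ip=${ip_candidates[tcp]}   # -> "NVMF_INITIATOR_IP"
  ip=${!ip}                  # indirect expansion -> 10.0.0.1 in this run

  # Per-pass success check and teardown.
  name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == "nvme0" ]]     # DH-HMAC-CHAP handshake succeeded
  scripts/rpc.py bdev_nvme_detach_controller nvme0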
01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmRjOWZhYzNkOTdkNzU1ZjdjZWMyNmZiZTA1OWE0NzkwOTYxMWViMTlhZTM0MWYwJf2+Rw==: 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTY2NDY2M2E0YzY4NjU5NzRlNjRhMDY4NzUzMjdjODaLM0W8: 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmRjOWZhYzNkOTdkNzU1ZjdjZWMyNmZiZTA1OWE0NzkwOTYxMWViMTlhZTM0MWYwJf2+Rw==: 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTY2NDY2M2E0YzY4NjU5NzRlNjRhMDY4NzUzMjdjODaLM0W8: ]] 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTY2NDY2M2E0YzY4NjU5NzRlNjRhMDY4NzUzMjdjODaLM0W8: 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.635 01:30:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.200 nvme0n1 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzIxNDNlNGFjMGQ2NzAxN2E4MGE3NDI4ZTljMGNhYjY2MTQxNTVmZGI3MDZmYzhiZDEwYWI3MDFiNTg2OTYyNrbisAI=: 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzIxNDNlNGFjMGQ2NzAxN2E4MGE3NDI4ZTljMGNhYjY2MTQxNTVmZGI3MDZmYzhiZDEwYWI3MDFiNTg2OTYyNrbisAI=: 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.200 01:30:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.456 nvme0n1 00:25:27.456 01:30:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.456 01:30:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.456 01:30:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.456 01:30:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.456 01:30:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.456 01:30:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.456 01:30:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.456 01:30:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.456 01:30:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.456 01:30:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.456 01:30:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.456 01:30:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:27.456 01:30:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.456 01:30:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:27.456 01:30:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.456 01:30:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:27.456 01:30:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:27.456 01:30:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:27.456 01:30:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODBjODllNzViMmM3ZTkyMDhiNmM3NTk3ZjE3NTIwNGThstEF: 00:25:27.456 01:30:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OWYyMTQ5MzI4MGUxNGJmYmM2YmZkNWYxYzgxMWM1MzE2YjU0N2FlNDBiZGIwMTJjNmY0NjI0MjA0MjA5ODJmNHLFhP8=: 00:25:27.456 01:30:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:27.456 01:30:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:27.456 01:30:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODBjODllNzViMmM3ZTkyMDhiNmM3NTk3ZjE3NTIwNGThstEF: 00:25:27.456 01:30:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWYyMTQ5MzI4MGUxNGJmYmM2YmZkNWYxYzgxMWM1MzE2YjU0N2FlNDBiZGIwMTJjNmY0NjI0MjA0MjA5ODJmNHLFhP8=: ]] 00:25:27.456 01:30:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWYyMTQ5MzI4MGUxNGJmYmM2YmZkNWYxYzgxMWM1MzE2YjU0N2FlNDBiZGIwMTJjNmY0NjI0MjA0MjA5ODJmNHLFhP8=: 00:25:27.456 01:30:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:27.456 01:30:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.456 01:30:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:27.456 01:30:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:27.456 01:30:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:27.456 01:30:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.456 01:30:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:27.456 01:30:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.456 01:30:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.456 01:30:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.456 01:30:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.456 01:30:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:27.456 01:30:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:27.456 01:30:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:27.456 01:30:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.456 01:30:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.456 01:30:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:27.457 01:30:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.457 01:30:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:27.457 01:30:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:27.457 01:30:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:27.457 01:30:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:27.457 01:30:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.457 01:30:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.020 nvme0n1 00:25:28.020 01:30:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.020 01:30:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.020 01:30:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.020 01:30:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.020 01:30:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.020 01:30:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.278 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.278 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.278 01:30:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.278 01:30:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.278 01:30:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.278 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.278 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:28.278 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.278 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:28.278 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:28.278 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:28.278 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM0NjdiY2MwNDU2YWViNjBjNzBkYTJjYjY5YTVjOTVjZTVkYTc3ZjljNGE1NDFhtQD/xA==: 00:25:28.278 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: 00:25:28.278 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:28.278 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:28.278 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM0NjdiY2MwNDU2YWViNjBjNzBkYTJjYjY5YTVjOTVjZTVkYTc3ZjljNGE1NDFhtQD/xA==: 00:25:28.278 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: ]] 00:25:28.278 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: 00:25:28.278 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:28.278 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.278 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:28.278 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:28.278 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:28.278 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.278 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:28.278 01:30:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.278 01:30:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.278 01:30:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.278 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.278 01:30:54 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:25:28.278 01:30:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:28.278 01:30:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:28.278 01:30:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.278 01:30:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.278 01:30:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:28.278 01:30:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.278 01:30:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:28.278 01:30:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:28.278 01:30:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:28.278 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:28.278 01:30:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.278 01:30:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.842 nvme0n1 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjJjZjIzNzYyOWExOGY4NjBiNzkyYWRjMmRjNTU0ZmSTD7aY: 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmUzODUzNmZkN2EwNDNiOGZlOGRiN2NiNTljMzk3ZjmWNCeq: 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ZjJjZjIzNzYyOWExOGY4NjBiNzkyYWRjMmRjNTU0ZmSTD7aY: 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmUzODUzNmZkN2EwNDNiOGZlOGRiN2NiNTljMzk3ZjmWNCeq: ]] 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmUzODUzNmZkN2EwNDNiOGZlOGRiN2NiNTljMzk3ZjmWNCeq: 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.842 01:30:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.408 nvme0n1 00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.408 
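
The @48-@51 echoes inside nvmet_auth_set_key configure the target side of the handshake. Their redirection targets are not visible in this trace; a plausible mapping onto the Linux (>= 5.18) nvmet configfs host attributes, reusing the keyid=2 values from this log, would be (paths and attribute wiring are an assumption, not confirmed by the trace):

  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed configfs path
  echo 'hmac(sha256)' > "$host/dhchap_hash"      # @48: negotiation digest
  echo 'ffdhe8192'    > "$host/dhchap_dhgroup"   # @49: DH group
  echo 'DHHC-1:01:ZjJjZjIzNzYyOWExOGY4NjBiNzkyYWRjMmRjNTU0ZmSTD7aY:' > "$host/dhchap_key"        # @50: host secret
  echo 'DHHC-1:01:NmUzODUzNmZkN2EwNDNiOGZlOGRiN2NiNTljMzk3ZjmWNCeq:' > "$host/dhchap_ctrl_key"   # @51: controller secret, skipped when empty
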
01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmRjOWZhYzNkOTdkNzU1ZjdjZWMyNmZiZTA1OWE0NzkwOTYxMWViMTlhZTM0MWYwJf2+Rw==: 00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTY2NDY2M2E0YzY4NjU5NzRlNjRhMDY4NzUzMjdjODaLM0W8: 00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmRjOWZhYzNkOTdkNzU1ZjdjZWMyNmZiZTA1OWE0NzkwOTYxMWViMTlhZTM0MWYwJf2+Rw==: 00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTY2NDY2M2E0YzY4NjU5NzRlNjRhMDY4NzUzMjdjODaLM0W8: ]] 00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTY2NDY2M2E0YzY4NjU5NzRlNjRhMDY4NzUzMjdjODaLM0W8: 00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
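
The nvmf/common.sh@741-@755 frames show how get_main_ns_ip resolves the address handed to the attach command's -a flag: it maps the transport to the *name* of the environment variable holding the address, then dereferences that name with bash indirect expansion. A reconstruction from the trace (the guard details of the real helper may differ):

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(                    # @742
          ["rdma"]=NVMF_FIRST_TARGET_IP           # @744
          ["tcp"]=NVMF_INITIATOR_IP               # @745
      )

      [[ -z $TEST_TRANSPORT ]] && return 1        # @747: transport must be set
      ip=${ip_candidates[$TEST_TRANSPORT]}        # @748: picks the variable NAME
      [[ -z ${!ip} ]] && return 1                 # @750: the variable must hold an address
      echo "${!ip}"                               # @755: prints e.g. 10.0.0.1
  }
  # TEST_TRANSPORT=tcp NVMF_INITIATOR_IP=10.0.0.1 get_main_ns_ip  ->  10.0.0.1
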
00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.408 01:30:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.974 nvme0n1 00:25:29.974 01:30:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.974 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.974 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.974 01:30:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.974 01:30:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.974 01:30:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.974 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.974 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.974 01:30:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.974 01:30:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.974 01:30:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.974 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.974 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:29.974 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.974 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:29.974 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:29.974 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:29.974 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzIxNDNlNGFjMGQ2NzAxN2E4MGE3NDI4ZTljMGNhYjY2MTQxNTVmZGI3MDZmYzhiZDEwYWI3MDFiNTg2OTYyNrbisAI=: 00:25:29.974 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:29.974 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:29.974 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:29.974 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzIxNDNlNGFjMGQ2NzAxN2E4MGE3NDI4ZTljMGNhYjY2MTQxNTVmZGI3MDZmYzhiZDEwYWI3MDFiNTg2OTYyNrbisAI=: 00:25:29.974 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:29.974 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:29.974 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.974 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:29.974 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:29.974 
01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:29.974 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.974 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:30.232 01:30:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.232 01:30:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.232 01:30:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.232 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.232 01:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:30.232 01:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:30.232 01:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:30.232 01:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.232 01:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.232 01:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:30.232 01:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.232 01:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:30.232 01:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:30.232 01:30:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:30.232 01:30:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:30.232 01:30:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.232 01:30:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.799 nvme0n1 00:25:30.799 01:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.799 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.799 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.799 01:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.799 01:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.799 01:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.799 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.799 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.799 01:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.799 01:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.799 01:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.799 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:30.799 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:30.799 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.800 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:25:30.800 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.800 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:30.800 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:30.800 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:30.800 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODBjODllNzViMmM3ZTkyMDhiNmM3NTk3ZjE3NTIwNGThstEF: 00:25:30.800 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWYyMTQ5MzI4MGUxNGJmYmM2YmZkNWYxYzgxMWM1MzE2YjU0N2FlNDBiZGIwMTJjNmY0NjI0MjA0MjA5ODJmNHLFhP8=: 00:25:30.800 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:30.800 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:30.800 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODBjODllNzViMmM3ZTkyMDhiNmM3NTk3ZjE3NTIwNGThstEF: 00:25:30.800 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWYyMTQ5MzI4MGUxNGJmYmM2YmZkNWYxYzgxMWM1MzE2YjU0N2FlNDBiZGIwMTJjNmY0NjI0MjA0MjA5ODJmNHLFhP8=: ]] 00:25:30.800 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWYyMTQ5MzI4MGUxNGJmYmM2YmZkNWYxYzgxMWM1MzE2YjU0N2FlNDBiZGIwMTJjNmY0NjI0MjA0MjA5ODJmNHLFhP8=: 00:25:30.800 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:30.800 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.800 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:30.800 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:30.800 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:30.800 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.800 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:30.800 01:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.800 01:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.800 01:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.800 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.800 01:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:30.800 01:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:30.800 01:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:30.800 01:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.800 01:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.800 01:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:30.800 01:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.800 01:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:30.800 01:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:30.800 01:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:30.800 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:30.800 01:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.800 01:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.800 nvme0n1 00:25:30.800 01:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.800 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.800 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.800 01:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.800 01:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.800 01:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.800 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.800 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.800 01:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.800 01:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM0NjdiY2MwNDU2YWViNjBjNzBkYTJjYjY5YTVjOTVjZTVkYTc3ZjljNGE1NDFhtQD/xA==: 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM0NjdiY2MwNDU2YWViNjBjNzBkYTJjYjY5YTVjOTVjZTVkYTc3ZjljNGE1NDFhtQD/xA==: 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: ]] 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
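
The host/auth.sh@58 expansion just traced is what makes the controller key optional: ${ckeys[keyid]:+...} expands to the extra flag pair only when a controller secret exists for that key ID, which is why the key4 attach commands in this log carry no --dhchap-ctrlr-key. In isolation (the array values are stand-ins; only presence vs. absence matters):

  ckeys=([1]='nonempty-secret' [4]='')
  for keyid in 1 4; do
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})   # @58: 0 or 2 words
      echo "keyid=$keyid -> ${#ckey[@]} extra args: ${ckey[*]}"
  done
  # keyid=1 -> 2 extra args: --dhchap-ctrlr-key ckey1
  # keyid=4 -> 0 extra args:
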
00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.058 nvme0n1 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.058 01:30:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.058 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.058 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.058 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:31.058 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.058 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:31.058 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:31.058 01:30:57 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:25:31.058 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjJjZjIzNzYyOWExOGY4NjBiNzkyYWRjMmRjNTU0ZmSTD7aY: 00:25:31.058 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmUzODUzNmZkN2EwNDNiOGZlOGRiN2NiNTljMzk3ZjmWNCeq: 00:25:31.058 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:31.058 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:31.058 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjJjZjIzNzYyOWExOGY4NjBiNzkyYWRjMmRjNTU0ZmSTD7aY: 00:25:31.058 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmUzODUzNmZkN2EwNDNiOGZlOGRiN2NiNTljMzk3ZjmWNCeq: ]] 00:25:31.058 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmUzODUzNmZkN2EwNDNiOGZlOGRiN2NiNTljMzk3ZjmWNCeq: 00:25:31.059 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:31.059 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.059 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:31.059 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:31.059 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:31.059 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.059 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:31.059 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.059 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.059 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.059 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.059 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:31.059 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:31.059 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:31.059 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.059 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.059 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:31.059 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.059 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:31.059 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:31.059 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:31.059 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:31.059 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.059 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.316 nvme0n1 00:25:31.316 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.316 01:30:57 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.316 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.316 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.317 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.317 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.317 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.317 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.317 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.317 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.317 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.317 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.317 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:31.317 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.317 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:31.317 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:31.317 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:31.317 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmRjOWZhYzNkOTdkNzU1ZjdjZWMyNmZiZTA1OWE0NzkwOTYxMWViMTlhZTM0MWYwJf2+Rw==: 00:25:31.317 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTY2NDY2M2E0YzY4NjU5NzRlNjRhMDY4NzUzMjdjODaLM0W8: 00:25:31.317 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:31.317 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:31.317 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmRjOWZhYzNkOTdkNzU1ZjdjZWMyNmZiZTA1OWE0NzkwOTYxMWViMTlhZTM0MWYwJf2+Rw==: 00:25:31.317 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTY2NDY2M2E0YzY4NjU5NzRlNjRhMDY4NzUzMjdjODaLM0W8: ]] 00:25:31.317 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTY2NDY2M2E0YzY4NjU5NzRlNjRhMDY4NzUzMjdjODaLM0W8: 00:25:31.317 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:31.317 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.317 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:31.317 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:31.317 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:31.317 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.317 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:31.317 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.317 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.317 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.317 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.317 01:30:57 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:25:31.317 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:31.317 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:31.317 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.317 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.317 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:31.317 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.317 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:31.317 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:31.317 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:31.317 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:31.317 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.317 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.574 nvme0n1 00:25:31.574 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.574 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.574 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.574 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.574 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.574 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.574 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.575 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.575 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.575 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.575 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.575 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.575 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:31.575 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.575 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:31.575 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:31.575 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:31.575 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzIxNDNlNGFjMGQ2NzAxN2E4MGE3NDI4ZTljMGNhYjY2MTQxNTVmZGI3MDZmYzhiZDEwYWI3MDFiNTg2OTYyNrbisAI=: 00:25:31.575 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:31.575 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:31.575 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:31.575 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NzIxNDNlNGFjMGQ2NzAxN2E4MGE3NDI4ZTljMGNhYjY2MTQxNTVmZGI3MDZmYzhiZDEwYWI3MDFiNTg2OTYyNrbisAI=:
00:25:31.575 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:31.575 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4
00:25:31.575 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:31.575 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:31.575 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:31.575 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:31.575 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:31.575 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:25:31.575 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:31.575 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:31.575 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:31.575 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:31.575 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:31.575 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:31.575 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:31.575 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:31.575 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:31.575 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:31.575 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:31.575 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:31.575 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:31.575 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:31.575 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:31.575 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:31.575 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:31.833 nvme0n1
00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0
00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODBjODllNzViMmM3ZTkyMDhiNmM3NTk3ZjE3NTIwNGThstEF:
00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWYyMTQ5MzI4MGUxNGJmYmM2YmZkNWYxYzgxMWM1MzE2YjU0N2FlNDBiZGIwMTJjNmY0NjI0MjA0MjA5ODJmNHLFhP8=:
00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODBjODllNzViMmM3ZTkyMDhiNmM3NTk3ZjE3NTIwNGThstEF:
00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWYyMTQ5MzI4MGUxNGJmYmM2YmZkNWYxYzgxMWM1MzE2YjU0N2FlNDBiZGIwMTJjNmY0NjI0MjA0MjA5ODJmNHLFhP8=: ]]
00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWYyMTQ5MzI4MGUxNGJmYmM2YmZkNWYxYzgxMWM1MzE2YjU0N2FlNDBiZGIwMTJjNmY0NjI0MjA0MjA5ODJmNHLFhP8=:
00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0
00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
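The nvmet_auth_set_key trace above (host/auth.sh@42-51) shows how the test arms the target side before the next connect attempt: it echoes the digest name, the DH group, and the DHHC-1 host and controller secrets for keyid 0 of the ffdhe3072 round. A DHHC-1 secret is written as DHHC-1:<t>:<base64 key material>:, where the <t> field selects the optional secret transformation (00 means the secret is used as-is). As a stand-alone sketch of that provisioning step, assuming the destination is a kernel nvmet configfs host entry (the /sys/kernel/config/nvmet paths and attribute names below are an assumption, not read from this log):

    #!/usr/bin/env bash
    # Sketch: provision DH-HMAC-CHAP secrets for one host on a kernel nvmet target.
    # Paths/attribute names are assumptions; the values are taken from the trace.
    hostnqn=nqn.2024-02.io.spdk:host0                  # host NQN used throughout this run
    hostdir=/sys/kernel/config/nvmet/hosts/$hostnqn
    mkdir -p "$hostdir"
    echo 'hmac(sha384)' > "$hostdir/dhchap_hash"       # digest, cf. auth.sh@48
    echo ffdhe3072      > "$hostdir/dhchap_dhgroup"    # DH group, cf. auth.sh@49
    # Host and controller secrets exactly as echoed at auth.sh@50-51:
    echo 'DHHC-1:00:ODBjODllNzViMmM3ZTkyMDhiNmM3NTk3ZjE3NTIwNGThstEF:' \
        > "$hostdir/dhchap_key"
    echo 'DHHC-1:03:OWYyMTQ5MzI4MGUxNGJmYmM2YmZkNWYxYzgxMWM1MzE2YjU0N2FlNDBiZGIwMTJjNmY0NjI0MjA0MjA5ODJmNHLFhP8=:' \
        > "$hostdir/dhchap_ctrl_key"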
00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.833 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.091 nvme0n1 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM0NjdiY2MwNDU2YWViNjBjNzBkYTJjYjY5YTVjOTVjZTVkYTc3ZjljNGE1NDFhtQD/xA==: 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM0NjdiY2MwNDU2YWViNjBjNzBkYTJjYjY5YTVjOTVjZTVkYTc3ZjljNGE1NDFhtQD/xA==: 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: ]] 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
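connect_authenticate is the initiator half and reduces to the two rpc_cmd invocations visible in the trace: bdev_nvme_set_options pins the DH-HMAC-CHAP digest and DH group the host may negotiate, and bdev_nvme_attach_controller performs the authenticated fabrics connect, passing --dhchap-key plus --dhchap-ctrlr-key whenever a controller key is set (keyid 0 above exercises bidirectional authentication). Since rpc_cmd is the autotest wrapper around scripts/rpc.py, the same sequence can be replayed by hand; a condensed sketch, where key1/ckey1 stand for secrets registered earlier in the run:

    # Restrict the negotiable DH-HMAC-CHAP parameters, then connect.
    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # The attach RPC prints the created bdev (nvme0n1 in the log). Confirm the
    # controller exists and tear it down, as auth.sh@64-65 does after each attempt:
    ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0

Every round in the log ends the same way: the reported controller name is compared against nvme0 and the controller is detached before the next digest/dhgroup/keyid combination is tried.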
00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.092 01:30:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.350 nvme0n1 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjJjZjIzNzYyOWExOGY4NjBiNzkyYWRjMmRjNTU0ZmSTD7aY: 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmUzODUzNmZkN2EwNDNiOGZlOGRiN2NiNTljMzk3ZjmWNCeq: 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjJjZjIzNzYyOWExOGY4NjBiNzkyYWRjMmRjNTU0ZmSTD7aY: 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmUzODUzNmZkN2EwNDNiOGZlOGRiN2NiNTljMzk3ZjmWNCeq: ]] 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmUzODUzNmZkN2EwNDNiOGZlOGRiN2NiNTljMzk3ZjmWNCeq: 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.350 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.608 nvme0n1 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmRjOWZhYzNkOTdkNzU1ZjdjZWMyNmZiZTA1OWE0NzkwOTYxMWViMTlhZTM0MWYwJf2+Rw==: 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTY2NDY2M2E0YzY4NjU5NzRlNjRhMDY4NzUzMjdjODaLM0W8: 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmRjOWZhYzNkOTdkNzU1ZjdjZWMyNmZiZTA1OWE0NzkwOTYxMWViMTlhZTM0MWYwJf2+Rw==: 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTY2NDY2M2E0YzY4NjU5NzRlNjRhMDY4NzUzMjdjODaLM0W8: ]] 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTY2NDY2M2E0YzY4NjU5NzRlNjRhMDY4NzUzMjdjODaLM0W8: 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.608 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.866 nvme0n1 00:25:32.866 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.866 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.866 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.866 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.866 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.866 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.866 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.866 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.866 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.866 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.866 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.866 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.866 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:32.866 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.866 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:32.866 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:32.866 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:32.866 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NzIxNDNlNGFjMGQ2NzAxN2E4MGE3NDI4ZTljMGNhYjY2MTQxNTVmZGI3MDZmYzhiZDEwYWI3MDFiNTg2OTYyNrbisAI=: 00:25:32.866 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:32.866 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:32.866 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:32.866 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzIxNDNlNGFjMGQ2NzAxN2E4MGE3NDI4ZTljMGNhYjY2MTQxNTVmZGI3MDZmYzhiZDEwYWI3MDFiNTg2OTYyNrbisAI=: 00:25:32.866 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:32.866 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:32.866 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.866 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:32.866 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:32.866 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:32.866 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.866 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:32.867 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.867 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.867 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.867 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.867 01:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:32.867 01:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:32.867 01:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:32.867 01:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.867 01:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.867 01:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:32.867 01:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.867 01:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:32.867 01:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:32.867 01:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:32.867 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:32.867 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.867 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.125 nvme0n1 00:25:33.125 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.125 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.125 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.125 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.125 01:30:58 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.125 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.125 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.125 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.125 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.125 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.125 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.125 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:33.125 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.125 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:33.125 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.125 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:33.125 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:33.125 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:33.125 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODBjODllNzViMmM3ZTkyMDhiNmM3NTk3ZjE3NTIwNGThstEF: 00:25:33.125 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWYyMTQ5MzI4MGUxNGJmYmM2YmZkNWYxYzgxMWM1MzE2YjU0N2FlNDBiZGIwMTJjNmY0NjI0MjA0MjA5ODJmNHLFhP8=: 00:25:33.125 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:33.125 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:33.125 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODBjODllNzViMmM3ZTkyMDhiNmM3NTk3ZjE3NTIwNGThstEF: 00:25:33.125 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWYyMTQ5MzI4MGUxNGJmYmM2YmZkNWYxYzgxMWM1MzE2YjU0N2FlNDBiZGIwMTJjNmY0NjI0MjA0MjA5ODJmNHLFhP8=: ]] 00:25:33.126 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWYyMTQ5MzI4MGUxNGJmYmM2YmZkNWYxYzgxMWM1MzE2YjU0N2FlNDBiZGIwMTJjNmY0NjI0MjA0MjA5ODJmNHLFhP8=: 00:25:33.126 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:33.126 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.126 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:33.126 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:33.126 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:33.126 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.126 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:33.126 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.126 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.126 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.126 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.126 01:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:33.126 01:30:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:25:33.126 01:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:33.126 01:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.126 01:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.126 01:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:33.126 01:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.126 01:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:33.126 01:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:33.126 01:30:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:33.126 01:30:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:33.126 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.126 01:30:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.384 nvme0n1 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM0NjdiY2MwNDU2YWViNjBjNzBkYTJjYjY5YTVjOTVjZTVkYTc3ZjljNGE1NDFhtQD/xA==: 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OGM0NjdiY2MwNDU2YWViNjBjNzBkYTJjYjY5YTVjOTVjZTVkYTc3ZjljNGE1NDFhtQD/xA==: 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: ]] 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.384 01:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.641 nvme0n1 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.641 01:30:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjJjZjIzNzYyOWExOGY4NjBiNzkyYWRjMmRjNTU0ZmSTD7aY: 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmUzODUzNmZkN2EwNDNiOGZlOGRiN2NiNTljMzk3ZjmWNCeq: 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjJjZjIzNzYyOWExOGY4NjBiNzkyYWRjMmRjNTU0ZmSTD7aY: 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmUzODUzNmZkN2EwNDNiOGZlOGRiN2NiNTljMzk3ZjmWNCeq: ]] 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmUzODUzNmZkN2EwNDNiOGZlOGRiN2NiNTljMzk3ZjmWNCeq: 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.641 01:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.899 nvme0n1 00:25:33.899 01:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.899 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.899 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.899 01:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.899 01:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.899 01:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.899 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.899 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.899 01:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.899 01:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.899 01:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.899 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.899 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:25:33.899 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.899 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:33.899 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:33.899 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:33.899 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmRjOWZhYzNkOTdkNzU1ZjdjZWMyNmZiZTA1OWE0NzkwOTYxMWViMTlhZTM0MWYwJf2+Rw==: 00:25:33.899 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTY2NDY2M2E0YzY4NjU5NzRlNjRhMDY4NzUzMjdjODaLM0W8: 00:25:33.899 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:33.899 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:33.899 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmRjOWZhYzNkOTdkNzU1ZjdjZWMyNmZiZTA1OWE0NzkwOTYxMWViMTlhZTM0MWYwJf2+Rw==: 00:25:33.899 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTY2NDY2M2E0YzY4NjU5NzRlNjRhMDY4NzUzMjdjODaLM0W8: ]] 00:25:33.899 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTY2NDY2M2E0YzY4NjU5NzRlNjRhMDY4NzUzMjdjODaLM0W8: 00:25:33.899 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:33.899 01:30:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.899 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:33.899 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:33.899 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:33.899 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.899 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:33.899 01:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.155 01:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.155 01:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.155 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.155 01:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:34.155 01:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:34.155 01:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:34.155 01:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.155 01:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.155 01:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:34.155 01:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.155 01:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:34.155 01:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:34.155 01:30:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:34.155 01:30:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:34.155 01:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.155 01:30:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.155 nvme0n1 00:25:34.155 01:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.155 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.412 01:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.413 01:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.413 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.413 01:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.413 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.413 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.413 01:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.413 01:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.413 01:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.413 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:34.413 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:34.413 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.413 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:34.413 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:34.413 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:34.413 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzIxNDNlNGFjMGQ2NzAxN2E4MGE3NDI4ZTljMGNhYjY2MTQxNTVmZGI3MDZmYzhiZDEwYWI3MDFiNTg2OTYyNrbisAI=: 00:25:34.413 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:34.413 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:34.413 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:34.413 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzIxNDNlNGFjMGQ2NzAxN2E4MGE3NDI4ZTljMGNhYjY2MTQxNTVmZGI3MDZmYzhiZDEwYWI3MDFiNTg2OTYyNrbisAI=: 00:25:34.413 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:34.413 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:34.413 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.413 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:34.413 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:34.413 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:34.413 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.413 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:34.413 01:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.413 01:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.413 01:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.413 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.413 01:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:34.413 01:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:34.413 01:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:34.413 01:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.413 01:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.413 01:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:34.413 01:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.413 01:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:34.413 01:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:34.413 01:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:34.413 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:34.413 01:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:34.413 01:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.670 nvme0n1 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODBjODllNzViMmM3ZTkyMDhiNmM3NTk3ZjE3NTIwNGThstEF: 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWYyMTQ5MzI4MGUxNGJmYmM2YmZkNWYxYzgxMWM1MzE2YjU0N2FlNDBiZGIwMTJjNmY0NjI0MjA0MjA5ODJmNHLFhP8=: 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODBjODllNzViMmM3ZTkyMDhiNmM3NTk3ZjE3NTIwNGThstEF: 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWYyMTQ5MzI4MGUxNGJmYmM2YmZkNWYxYzgxMWM1MzE2YjU0N2FlNDBiZGIwMTJjNmY0NjI0MjA0MjA5ODJmNHLFhP8=: ]] 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWYyMTQ5MzI4MGUxNGJmYmM2YmZkNWYxYzgxMWM1MzE2YjU0N2FlNDBiZGIwMTJjNmY0NjI0MjA0MjA5ODJmNHLFhP8=: 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.670 01:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.926 nvme0n1 00:25:34.926 01:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.926 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.926 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.926 01:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.926 01:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.926 01:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.183 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.183 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.183 01:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.183 01:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.183 01:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.183 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.183 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:35.183 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.183 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:35.183 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:35.183 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:35.183 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OGM0NjdiY2MwNDU2YWViNjBjNzBkYTJjYjY5YTVjOTVjZTVkYTc3ZjljNGE1NDFhtQD/xA==: 00:25:35.183 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: 00:25:35.183 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:35.183 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:35.183 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM0NjdiY2MwNDU2YWViNjBjNzBkYTJjYjY5YTVjOTVjZTVkYTc3ZjljNGE1NDFhtQD/xA==: 00:25:35.183 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: ]] 00:25:35.183 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: 00:25:35.183 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:35.183 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.183 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:35.183 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:35.183 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:35.183 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.183 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:35.183 01:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.183 01:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.183 01:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.183 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.183 01:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:35.183 01:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:35.183 01:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:35.183 01:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.183 01:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.183 01:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:35.183 01:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.183 01:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:35.183 01:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:35.183 01:31:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:35.183 01:31:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:35.183 01:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.183 01:31:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.440 nvme0n1 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.440 01:31:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjJjZjIzNzYyOWExOGY4NjBiNzkyYWRjMmRjNTU0ZmSTD7aY: 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmUzODUzNmZkN2EwNDNiOGZlOGRiN2NiNTljMzk3ZjmWNCeq: 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjJjZjIzNzYyOWExOGY4NjBiNzkyYWRjMmRjNTU0ZmSTD7aY: 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmUzODUzNmZkN2EwNDNiOGZlOGRiN2NiNTljMzk3ZjmWNCeq: ]] 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmUzODUzNmZkN2EwNDNiOGZlOGRiN2NiNTljMzk3ZjmWNCeq: 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.440 01:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.003 nvme0n1 00:25:36.003 01:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.003 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.003 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.003 01:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.003 01:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.003 01:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.003 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.003 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.003 01:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.003 01:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.003 01:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.003 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.003 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:36.003 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.003 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:36.003 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:36.003 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:36.003 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmRjOWZhYzNkOTdkNzU1ZjdjZWMyNmZiZTA1OWE0NzkwOTYxMWViMTlhZTM0MWYwJf2+Rw==: 00:25:36.003 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTY2NDY2M2E0YzY4NjU5NzRlNjRhMDY4NzUzMjdjODaLM0W8: 00:25:36.003 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:36.003 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:36.003 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YmRjOWZhYzNkOTdkNzU1ZjdjZWMyNmZiZTA1OWE0NzkwOTYxMWViMTlhZTM0MWYwJf2+Rw==: 00:25:36.003 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTY2NDY2M2E0YzY4NjU5NzRlNjRhMDY4NzUzMjdjODaLM0W8: ]] 00:25:36.003 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTY2NDY2M2E0YzY4NjU5NzRlNjRhMDY4NzUzMjdjODaLM0W8: 00:25:36.003 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:36.004 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.004 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:36.004 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:36.004 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:36.004 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.004 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:36.004 01:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.004 01:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.004 01:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.004 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.004 01:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:36.004 01:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:36.004 01:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:36.004 01:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.004 01:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.004 01:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:36.004 01:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.004 01:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:36.004 01:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:36.004 01:31:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:36.004 01:31:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:36.004 01:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.004 01:31:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.261 nvme0n1 00:25:36.261 01:31:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.261 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.261 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.261 01:31:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.261 01:31:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.261 01:31:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.518 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:25:36.518 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.518 01:31:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.518 01:31:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.518 01:31:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.518 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.518 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:36.518 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.518 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:36.518 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:36.518 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:36.518 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzIxNDNlNGFjMGQ2NzAxN2E4MGE3NDI4ZTljMGNhYjY2MTQxNTVmZGI3MDZmYzhiZDEwYWI3MDFiNTg2OTYyNrbisAI=: 00:25:36.518 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:36.518 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:36.518 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:36.518 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzIxNDNlNGFjMGQ2NzAxN2E4MGE3NDI4ZTljMGNhYjY2MTQxNTVmZGI3MDZmYzhiZDEwYWI3MDFiNTg2OTYyNrbisAI=: 00:25:36.518 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:36.518 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:36.518 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.518 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:36.518 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:36.518 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:36.518 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.518 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:36.518 01:31:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.518 01:31:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.518 01:31:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.518 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.518 01:31:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:36.518 01:31:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:36.518 01:31:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:36.518 01:31:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.518 01:31:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.518 01:31:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:36.518 01:31:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.518 01:31:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:25:36.518 01:31:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:36.518 01:31:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:36.518 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:36.518 01:31:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.518 01:31:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.775 nvme0n1 00:25:36.775 01:31:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.775 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.775 01:31:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.775 01:31:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.775 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.775 01:31:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.775 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.775 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.775 01:31:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.775 01:31:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.775 01:31:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.775 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:36.775 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.775 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:36.776 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.776 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:36.776 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:36.776 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:36.776 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODBjODllNzViMmM3ZTkyMDhiNmM3NTk3ZjE3NTIwNGThstEF: 00:25:36.776 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWYyMTQ5MzI4MGUxNGJmYmM2YmZkNWYxYzgxMWM1MzE2YjU0N2FlNDBiZGIwMTJjNmY0NjI0MjA0MjA5ODJmNHLFhP8=: 00:25:36.776 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:36.776 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:36.776 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODBjODllNzViMmM3ZTkyMDhiNmM3NTk3ZjE3NTIwNGThstEF: 00:25:36.776 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWYyMTQ5MzI4MGUxNGJmYmM2YmZkNWYxYzgxMWM1MzE2YjU0N2FlNDBiZGIwMTJjNmY0NjI0MjA0MjA5ODJmNHLFhP8=: ]] 00:25:36.776 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWYyMTQ5MzI4MGUxNGJmYmM2YmZkNWYxYzgxMWM1MzE2YjU0N2FlNDBiZGIwMTJjNmY0NjI0MjA0MjA5ODJmNHLFhP8=: 00:25:36.776 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:36.776 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:25:36.776 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:36.776 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:36.776 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:36.776 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.776 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:36.776 01:31:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.776 01:31:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.776 01:31:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.776 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.776 01:31:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:36.776 01:31:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:36.776 01:31:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:36.776 01:31:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.776 01:31:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.776 01:31:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:36.776 01:31:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.776 01:31:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:36.776 01:31:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:36.776 01:31:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:36.776 01:31:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:36.776 01:31:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.776 01:31:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.340 nvme0n1 00:25:37.340 01:31:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.340 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.340 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.340 01:31:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.340 01:31:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.340 01:31:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.597 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.597 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.597 01:31:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.597 01:31:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.597 01:31:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.597 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.597 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:25:37.597 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.597 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:37.597 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:37.597 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:37.597 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM0NjdiY2MwNDU2YWViNjBjNzBkYTJjYjY5YTVjOTVjZTVkYTc3ZjljNGE1NDFhtQD/xA==: 00:25:37.597 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: 00:25:37.597 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:37.597 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:37.597 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM0NjdiY2MwNDU2YWViNjBjNzBkYTJjYjY5YTVjOTVjZTVkYTc3ZjljNGE1NDFhtQD/xA==: 00:25:37.597 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: ]] 00:25:37.597 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: 00:25:37.597 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:37.597 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.597 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:37.597 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:37.597 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:37.597 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.597 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:37.597 01:31:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.597 01:31:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.597 01:31:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.597 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.597 01:31:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:37.597 01:31:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:37.597 01:31:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:37.597 01:31:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.597 01:31:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.597 01:31:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:37.597 01:31:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.597 01:31:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:37.597 01:31:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:37.597 01:31:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:37.597 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:37.597 01:31:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.597 01:31:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.162 nvme0n1 00:25:38.162 01:31:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.162 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.162 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.162 01:31:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.162 01:31:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.162 01:31:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.162 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.162 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.162 01:31:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.162 01:31:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.162 01:31:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.162 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.162 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:38.162 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.162 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:38.162 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:38.162 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:38.162 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjJjZjIzNzYyOWExOGY4NjBiNzkyYWRjMmRjNTU0ZmSTD7aY: 00:25:38.162 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmUzODUzNmZkN2EwNDNiOGZlOGRiN2NiNTljMzk3ZjmWNCeq: 00:25:38.162 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:38.162 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:38.162 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjJjZjIzNzYyOWExOGY4NjBiNzkyYWRjMmRjNTU0ZmSTD7aY: 00:25:38.162 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmUzODUzNmZkN2EwNDNiOGZlOGRiN2NiNTljMzk3ZjmWNCeq: ]] 00:25:38.162 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmUzODUzNmZkN2EwNDNiOGZlOGRiN2NiNTljMzk3ZjmWNCeq: 00:25:38.162 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:38.162 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.162 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:38.162 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:38.162 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:38.162 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.162 01:31:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:25:38.162 01:31:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.162 01:31:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.162 01:31:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.162 01:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.162 01:31:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:38.162 01:31:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:38.162 01:31:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:38.162 01:31:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.162 01:31:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.162 01:31:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:38.162 01:31:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.162 01:31:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:38.162 01:31:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:38.162 01:31:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:38.162 01:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:38.162 01:31:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.162 01:31:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.726 nvme0n1 00:25:38.726 01:31:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.726 01:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.726 01:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.726 01:31:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.726 01:31:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.726 01:31:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.726 01:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.726 01:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.726 01:31:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.726 01:31:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.726 01:31:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.726 01:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.726 01:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:38.726 01:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.726 01:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:38.726 01:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:38.726 01:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:38.726 01:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YmRjOWZhYzNkOTdkNzU1ZjdjZWMyNmZiZTA1OWE0NzkwOTYxMWViMTlhZTM0MWYwJf2+Rw==: 00:25:38.726 01:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTY2NDY2M2E0YzY4NjU5NzRlNjRhMDY4NzUzMjdjODaLM0W8: 00:25:38.726 01:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:38.726 01:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:38.726 01:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmRjOWZhYzNkOTdkNzU1ZjdjZWMyNmZiZTA1OWE0NzkwOTYxMWViMTlhZTM0MWYwJf2+Rw==: 00:25:38.726 01:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTY2NDY2M2E0YzY4NjU5NzRlNjRhMDY4NzUzMjdjODaLM0W8: ]] 00:25:38.726 01:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTY2NDY2M2E0YzY4NjU5NzRlNjRhMDY4NzUzMjdjODaLM0W8: 00:25:38.726 01:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:38.726 01:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.726 01:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:38.726 01:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:38.726 01:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:38.726 01:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.726 01:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:38.726 01:31:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.726 01:31:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.726 01:31:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.726 01:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.726 01:31:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:38.726 01:31:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:38.726 01:31:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:38.726 01:31:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.726 01:31:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.726 01:31:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:38.726 01:31:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.726 01:31:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:38.726 01:31:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:38.726 01:31:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:38.726 01:31:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:38.727 01:31:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.727 01:31:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.290 nvme0n1 00:25:39.290 01:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.290 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:25:39.290 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.290 01:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.290 01:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.290 01:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.290 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.290 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.290 01:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.290 01:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.290 01:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.290 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.290 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:39.290 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.290 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:39.290 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:39.290 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:39.290 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzIxNDNlNGFjMGQ2NzAxN2E4MGE3NDI4ZTljMGNhYjY2MTQxNTVmZGI3MDZmYzhiZDEwYWI3MDFiNTg2OTYyNrbisAI=: 00:25:39.290 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:39.290 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:39.290 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:39.290 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzIxNDNlNGFjMGQ2NzAxN2E4MGE3NDI4ZTljMGNhYjY2MTQxNTVmZGI3MDZmYzhiZDEwYWI3MDFiNTg2OTYyNrbisAI=: 00:25:39.290 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:39.290 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:39.290 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.290 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:39.290 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:39.290 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:39.290 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.290 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:39.547 01:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.547 01:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.547 01:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.547 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.547 01:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:39.547 01:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:39.547 01:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:39.547 01:31:05 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.547 01:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.547 01:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:39.547 01:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.547 01:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:39.547 01:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:39.547 01:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:39.547 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:39.547 01:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.547 01:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.110 nvme0n1 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODBjODllNzViMmM3ZTkyMDhiNmM3NTk3ZjE3NTIwNGThstEF: 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWYyMTQ5MzI4MGUxNGJmYmM2YmZkNWYxYzgxMWM1MzE2YjU0N2FlNDBiZGIwMTJjNmY0NjI0MjA0MjA5ODJmNHLFhP8=: 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ODBjODllNzViMmM3ZTkyMDhiNmM3NTk3ZjE3NTIwNGThstEF: 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWYyMTQ5MzI4MGUxNGJmYmM2YmZkNWYxYzgxMWM1MzE2YjU0N2FlNDBiZGIwMTJjNmY0NjI0MjA0MjA5ODJmNHLFhP8=: ]] 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWYyMTQ5MzI4MGUxNGJmYmM2YmZkNWYxYzgxMWM1MzE2YjU0N2FlNDBiZGIwMTJjNmY0NjI0MjA0MjA5ODJmNHLFhP8=: 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:40.110 01:31:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:40.111 01:31:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:40.111 01:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.111 01:31:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.111 nvme0n1 00:25:40.111 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.111 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.111 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.111 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.111 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.111 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.367 01:31:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.367 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.367 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.367 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.367 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.367 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.367 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:40.367 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.367 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:40.367 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:40.367 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:40.367 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM0NjdiY2MwNDU2YWViNjBjNzBkYTJjYjY5YTVjOTVjZTVkYTc3ZjljNGE1NDFhtQD/xA==: 00:25:40.367 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: 00:25:40.367 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:40.367 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:40.367 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM0NjdiY2MwNDU2YWViNjBjNzBkYTJjYjY5YTVjOTVjZTVkYTc3ZjljNGE1NDFhtQD/xA==: 00:25:40.367 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: ]] 00:25:40.367 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: 00:25:40.367 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:40.367 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.367 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:40.367 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:40.367 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:40.367 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.367 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:40.367 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.367 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.367 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.367 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.367 01:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:40.367 01:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:40.367 01:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:40.367 01:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.367 01:31:06 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.367 01:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:40.367 01:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.367 01:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:40.367 01:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:40.367 01:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:40.367 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:40.367 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.367 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.367 nvme0n1 00:25:40.367 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.368 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.368 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.368 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.368 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.368 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.368 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.368 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.368 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.368 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjJjZjIzNzYyOWExOGY4NjBiNzkyYWRjMmRjNTU0ZmSTD7aY: 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmUzODUzNmZkN2EwNDNiOGZlOGRiN2NiNTljMzk3ZjmWNCeq: 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjJjZjIzNzYyOWExOGY4NjBiNzkyYWRjMmRjNTU0ZmSTD7aY: 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmUzODUzNmZkN2EwNDNiOGZlOGRiN2NiNTljMzk3ZjmWNCeq: ]] 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmUzODUzNmZkN2EwNDNiOGZlOGRiN2NiNTljMzk3ZjmWNCeq: 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.624 nvme0n1 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.624 01:31:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmRjOWZhYzNkOTdkNzU1ZjdjZWMyNmZiZTA1OWE0NzkwOTYxMWViMTlhZTM0MWYwJf2+Rw==: 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTY2NDY2M2E0YzY4NjU5NzRlNjRhMDY4NzUzMjdjODaLM0W8: 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmRjOWZhYzNkOTdkNzU1ZjdjZWMyNmZiZTA1OWE0NzkwOTYxMWViMTlhZTM0MWYwJf2+Rw==: 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTY2NDY2M2E0YzY4NjU5NzRlNjRhMDY4NzUzMjdjODaLM0W8: ]] 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTY2NDY2M2E0YzY4NjU5NzRlNjRhMDY4NzUzMjdjODaLM0W8: 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:40.624 01:31:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.624 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.888 nvme0n1 00:25:40.888 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.888 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.888 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.888 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.888 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.888 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.888 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.888 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.888 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.888 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.888 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.888 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.888 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:40.888 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.888 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:40.888 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:40.888 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:40.888 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzIxNDNlNGFjMGQ2NzAxN2E4MGE3NDI4ZTljMGNhYjY2MTQxNTVmZGI3MDZmYzhiZDEwYWI3MDFiNTg2OTYyNrbisAI=: 00:25:40.888 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:40.888 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:40.888 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:40.888 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzIxNDNlNGFjMGQ2NzAxN2E4MGE3NDI4ZTljMGNhYjY2MTQxNTVmZGI3MDZmYzhiZDEwYWI3MDFiNTg2OTYyNrbisAI=: 00:25:40.888 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:40.888 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:40.888 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.888 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:40.888 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:40.888 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:40.888 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.888 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:40.888 01:31:06 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.888 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.888 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.888 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.888 01:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:40.888 01:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:40.889 01:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:40.889 01:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.889 01:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.889 01:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:40.889 01:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.889 01:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:40.889 01:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:40.889 01:31:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:40.889 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:40.889 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.889 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.159 nvme0n1 00:25:41.159 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.159 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.159 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.159 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.159 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.159 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.159 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.159 01:31:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.159 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.159 01:31:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.159 01:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.159 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:41.159 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.159 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:41.159 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.159 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:41.159 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:41.159 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:41.159 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ODBjODllNzViMmM3ZTkyMDhiNmM3NTk3ZjE3NTIwNGThstEF: 00:25:41.159 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWYyMTQ5MzI4MGUxNGJmYmM2YmZkNWYxYzgxMWM1MzE2YjU0N2FlNDBiZGIwMTJjNmY0NjI0MjA0MjA5ODJmNHLFhP8=: 00:25:41.159 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:41.159 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:41.159 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODBjODllNzViMmM3ZTkyMDhiNmM3NTk3ZjE3NTIwNGThstEF: 00:25:41.159 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWYyMTQ5MzI4MGUxNGJmYmM2YmZkNWYxYzgxMWM1MzE2YjU0N2FlNDBiZGIwMTJjNmY0NjI0MjA0MjA5ODJmNHLFhP8=: ]] 00:25:41.159 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWYyMTQ5MzI4MGUxNGJmYmM2YmZkNWYxYzgxMWM1MzE2YjU0N2FlNDBiZGIwMTJjNmY0NjI0MjA0MjA5ODJmNHLFhP8=: 00:25:41.159 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:41.159 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.159 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:41.159 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:41.159 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:41.159 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.159 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:41.159 01:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.159 01:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.159 01:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.159 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.159 01:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:41.159 01:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:41.159 01:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:41.159 01:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.159 01:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.159 01:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:41.159 01:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.159 01:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:41.159 01:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:41.159 01:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:41.159 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:41.159 01:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.159 01:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.415 nvme0n1 00:25:41.415 01:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.415 
01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.415 01:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.415 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.415 01:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.415 01:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.415 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.415 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.415 01:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.415 01:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.415 01:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.415 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.415 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:41.415 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.415 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:41.415 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:41.415 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:41.415 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM0NjdiY2MwNDU2YWViNjBjNzBkYTJjYjY5YTVjOTVjZTVkYTc3ZjljNGE1NDFhtQD/xA==: 00:25:41.415 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: 00:25:41.415 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:41.415 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:41.416 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM0NjdiY2MwNDU2YWViNjBjNzBkYTJjYjY5YTVjOTVjZTVkYTc3ZjljNGE1NDFhtQD/xA==: 00:25:41.416 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: ]] 00:25:41.416 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: 00:25:41.416 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:41.416 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.416 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:41.416 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:41.416 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:41.416 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.416 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:41.416 01:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.416 01:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.416 01:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.416 01:31:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.416 01:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:41.416 01:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:41.416 01:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:41.416 01:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.416 01:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.416 01:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:41.416 01:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.416 01:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:41.416 01:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:41.416 01:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:41.416 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:41.416 01:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.416 01:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.672 nvme0n1 00:25:41.672 01:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.672 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.672 01:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.672 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.672 01:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.672 01:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.672 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.672 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.672 01:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.672 01:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.672 01:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.672 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.672 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:41.672 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.672 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:41.672 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:41.672 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:41.672 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjJjZjIzNzYyOWExOGY4NjBiNzkyYWRjMmRjNTU0ZmSTD7aY: 00:25:41.672 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmUzODUzNmZkN2EwNDNiOGZlOGRiN2NiNTljMzk3ZjmWNCeq: 00:25:41.672 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:41.672 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
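The echo calls in the @48-@51 trace lines are nvmet_auth_set_key pushing the DH-CHAP parameters for one keyid into the kernel target. A minimal sketch of that step, assuming the Linux nvmet configfs attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) and a hypothetical $hostnqn; only the echoed values are taken from the trace:

    # Sketch, not the verbatim test helper: write one keyid's DH-CHAP
    # parameters into the target. keys/ckeys are the arrays the trace loops
    # over; the configfs paths are an assumption about the nvmet layout.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local host=/sys/kernel/config/nvmet/hosts/$hostnqn   # hypothetical
        echo "hmac($digest)"   > "$host/dhchap_hash"         # @48
        echo "$dhgroup"        > "$host/dhchap_dhgroup"      # @49
        echo "${keys[keyid]}"  > "$host/dhchap_key"          # @50
        # @51: the controller (bidirectional) key is optional; keyid 4 has none
        [[ -z ${ckeys[keyid]} ]] || echo "${ckeys[keyid]}" > "$host/dhchap_ctrl_key"
    }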
00:25:41.672 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjJjZjIzNzYyOWExOGY4NjBiNzkyYWRjMmRjNTU0ZmSTD7aY: 00:25:41.672 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmUzODUzNmZkN2EwNDNiOGZlOGRiN2NiNTljMzk3ZjmWNCeq: ]] 00:25:41.672 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmUzODUzNmZkN2EwNDNiOGZlOGRiN2NiNTljMzk3ZjmWNCeq: 00:25:41.672 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:41.672 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.672 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:41.672 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:41.672 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:41.672 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.672 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:41.672 01:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.672 01:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.672 01:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.673 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.673 01:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:41.673 01:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:41.673 01:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:41.673 01:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.673 01:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.673 01:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:41.673 01:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.673 01:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:41.673 01:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:41.673 01:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:41.673 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:41.673 01:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.673 01:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.928 nvme0n1 00:25:41.928 01:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.928 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.928 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.928 01:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.928 01:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.928 01:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.928 01:31:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.928 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.928 01:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.928 01:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.928 01:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.928 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.928 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:41.928 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.929 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:41.929 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:41.929 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:41.929 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmRjOWZhYzNkOTdkNzU1ZjdjZWMyNmZiZTA1OWE0NzkwOTYxMWViMTlhZTM0MWYwJf2+Rw==: 00:25:41.929 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTY2NDY2M2E0YzY4NjU5NzRlNjRhMDY4NzUzMjdjODaLM0W8: 00:25:41.929 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:41.929 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:41.929 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmRjOWZhYzNkOTdkNzU1ZjdjZWMyNmZiZTA1OWE0NzkwOTYxMWViMTlhZTM0MWYwJf2+Rw==: 00:25:41.929 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTY2NDY2M2E0YzY4NjU5NzRlNjRhMDY4NzUzMjdjODaLM0W8: ]] 00:25:41.929 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTY2NDY2M2E0YzY4NjU5NzRlNjRhMDY4NzUzMjdjODaLM0W8: 00:25:41.929 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:41.929 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.929 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:41.929 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:41.929 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:41.929 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.929 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:41.929 01:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.929 01:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.929 01:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.929 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.929 01:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:41.929 01:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:41.929 01:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:41.929 01:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.929 01:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
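The @741-@755 lines repeated before every attach are get_main_ns_ip resolving which address to dial. Reconstructed from the trace: the indirection through the variable name is what produces the '[[ -z NVMF_INITIATOR_IP ]]' check followed by '[[ -z 10.0.0.1 ]]'; $TEST_TRANSPORT and NVMF_INITIATOR_IP come from the test environment:

    # Sketch: map the transport to the env var holding the connect address,
    # then dereference it. With TEST_TRANSPORT=tcp and NVMF_INITIATOR_IP set,
    # this prints 10.0.0.1 exactly as the trace shows.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # @747
        ip=${ip_candidates[$TEST_TRANSPORT]}          # @748
        [[ -z ${!ip} ]] && return 1                   # @750, indirect expansion
        echo "${!ip}"                                 # @755
    }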
00:25:41.929 01:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:41.929 01:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.929 01:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:41.929 01:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:41.929 01:31:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:41.929 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:41.929 01:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.929 01:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.185 nvme0n1 00:25:42.186 01:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.186 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.186 01:31:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.186 01:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.186 01:31:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.186 01:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.186 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.186 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.186 01:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.186 01:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.186 01:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.186 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.186 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:42.186 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.186 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:42.186 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:42.186 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:42.186 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzIxNDNlNGFjMGQ2NzAxN2E4MGE3NDI4ZTljMGNhYjY2MTQxNTVmZGI3MDZmYzhiZDEwYWI3MDFiNTg2OTYyNrbisAI=: 00:25:42.186 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:42.186 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:42.186 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:42.186 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzIxNDNlNGFjMGQ2NzAxN2E4MGE3NDI4ZTljMGNhYjY2MTQxNTVmZGI3MDZmYzhiZDEwYWI3MDFiNTg2OTYyNrbisAI=: 00:25:42.186 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:42.186 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:42.186 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.186 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:42.186 
01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:42.186 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:42.186 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.186 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:42.186 01:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.186 01:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.186 01:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.186 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.186 01:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:42.186 01:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:42.186 01:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:42.186 01:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.186 01:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.186 01:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:42.186 01:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.186 01:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:42.186 01:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:42.186 01:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:42.186 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:42.186 01:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.186 01:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.443 nvme0n1 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODBjODllNzViMmM3ZTkyMDhiNmM3NTk3ZjE3NTIwNGThstEF: 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWYyMTQ5MzI4MGUxNGJmYmM2YmZkNWYxYzgxMWM1MzE2YjU0N2FlNDBiZGIwMTJjNmY0NjI0MjA0MjA5ODJmNHLFhP8=: 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODBjODllNzViMmM3ZTkyMDhiNmM3NTk3ZjE3NTIwNGThstEF: 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWYyMTQ5MzI4MGUxNGJmYmM2YmZkNWYxYzgxMWM1MzE2YjU0N2FlNDBiZGIwMTJjNmY0NjI0MjA0MjA5ODJmNHLFhP8=: ]] 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWYyMTQ5MzI4MGUxNGJmYmM2YmZkNWYxYzgxMWM1MzE2YjU0N2FlNDBiZGIwMTJjNmY0NjI0MjA0MjA5ODJmNHLFhP8=: 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:42.443 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:42.444 01:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.444 01:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.700 nvme0n1 00:25:42.700 01:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.700 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.700 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.700 01:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.701 01:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.701 01:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.701 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.701 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.701 01:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.701 01:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.701 01:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.701 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.701 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:42.701 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.701 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:42.701 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:42.701 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:42.701 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM0NjdiY2MwNDU2YWViNjBjNzBkYTJjYjY5YTVjOTVjZTVkYTc3ZjljNGE1NDFhtQD/xA==: 00:25:42.701 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: 00:25:42.701 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:42.701 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:42.701 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM0NjdiY2MwNDU2YWViNjBjNzBkYTJjYjY5YTVjOTVjZTVkYTc3ZjljNGE1NDFhtQD/xA==: 00:25:42.701 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: ]] 00:25:42.701 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: 00:25:42.701 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:42.701 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.701 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:42.701 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:42.701 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:42.701 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.701 01:31:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:42.701 01:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.701 01:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.701 01:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.701 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.701 01:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:42.701 01:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:42.701 01:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:42.701 01:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.701 01:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.701 01:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:42.701 01:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.701 01:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:42.701 01:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:42.701 01:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:42.701 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:42.701 01:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.701 01:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.958 nvme0n1 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
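The host side of each iteration is connect_authenticate (@55-@65): pin the initiator to a single digest/dhgroup, attach with the matching key name, check that the controller actually appeared, then detach. Every RPC below is copied from the trace; the function wrapper and quoting are a reconstruction:

    # Sketch of the attach/verify/detach sequence repeated throughout this log.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})   # @58
        rpc_cmd bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"       # @60
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"                       # @61
        # @64: a successful DH-CHAP handshake leaves exactly one controller
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0                         # @65
    }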
00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjJjZjIzNzYyOWExOGY4NjBiNzkyYWRjMmRjNTU0ZmSTD7aY: 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmUzODUzNmZkN2EwNDNiOGZlOGRiN2NiNTljMzk3ZjmWNCeq: 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjJjZjIzNzYyOWExOGY4NjBiNzkyYWRjMmRjNTU0ZmSTD7aY: 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmUzODUzNmZkN2EwNDNiOGZlOGRiN2NiNTljMzk3ZjmWNCeq: ]] 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmUzODUzNmZkN2EwNDNiOGZlOGRiN2NiNTljMzk3ZjmWNCeq: 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.958 01:31:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.216 nvme0n1 00:25:43.216 01:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.216 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:25:43.216 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.216 01:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.216 01:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.216 01:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.473 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.473 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.473 01:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.473 01:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.473 01:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.473 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.473 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:43.473 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.473 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:43.473 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:43.473 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:43.473 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmRjOWZhYzNkOTdkNzU1ZjdjZWMyNmZiZTA1OWE0NzkwOTYxMWViMTlhZTM0MWYwJf2+Rw==: 00:25:43.473 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTY2NDY2M2E0YzY4NjU5NzRlNjRhMDY4NzUzMjdjODaLM0W8: 00:25:43.473 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:43.473 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:43.473 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmRjOWZhYzNkOTdkNzU1ZjdjZWMyNmZiZTA1OWE0NzkwOTYxMWViMTlhZTM0MWYwJf2+Rw==: 00:25:43.473 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTY2NDY2M2E0YzY4NjU5NzRlNjRhMDY4NzUzMjdjODaLM0W8: ]] 00:25:43.473 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTY2NDY2M2E0YzY4NjU5NzRlNjRhMDY4NzUzMjdjODaLM0W8: 00:25:43.473 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:43.473 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.473 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:43.473 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:43.473 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:43.473 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.473 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:43.473 01:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.473 01:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.473 01:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.473 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.473 01:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:25:43.473 01:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:43.473 01:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:43.473 01:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.473 01:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.473 01:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:43.473 01:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.473 01:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:43.473 01:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:43.473 01:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:43.473 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:43.473 01:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.473 01:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.731 nvme0n1 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzIxNDNlNGFjMGQ2NzAxN2E4MGE3NDI4ZTljMGNhYjY2MTQxNTVmZGI3MDZmYzhiZDEwYWI3MDFiNTg2OTYyNrbisAI=: 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NzIxNDNlNGFjMGQ2NzAxN2E4MGE3NDI4ZTljMGNhYjY2MTQxNTVmZGI3MDZmYzhiZDEwYWI3MDFiNTg2OTYyNrbisAI=: 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.731 01:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.989 nvme0n1 00:25:43.989 01:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.989 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.989 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.989 01:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.990 01:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.990 01:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.990 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.990 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.990 01:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:25:43.990 01:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.990 01:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.990 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:43.990 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.990 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:43.990 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.990 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:43.990 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:43.990 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:43.990 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODBjODllNzViMmM3ZTkyMDhiNmM3NTk3ZjE3NTIwNGThstEF: 00:25:43.990 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWYyMTQ5MzI4MGUxNGJmYmM2YmZkNWYxYzgxMWM1MzE2YjU0N2FlNDBiZGIwMTJjNmY0NjI0MjA0MjA5ODJmNHLFhP8=: 00:25:43.990 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:43.990 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:43.990 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODBjODllNzViMmM3ZTkyMDhiNmM3NTk3ZjE3NTIwNGThstEF: 00:25:43.990 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWYyMTQ5MzI4MGUxNGJmYmM2YmZkNWYxYzgxMWM1MzE2YjU0N2FlNDBiZGIwMTJjNmY0NjI0MjA0MjA5ODJmNHLFhP8=: ]] 00:25:43.990 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWYyMTQ5MzI4MGUxNGJmYmM2YmZkNWYxYzgxMWM1MzE2YjU0N2FlNDBiZGIwMTJjNmY0NjI0MjA0MjA5ODJmNHLFhP8=: 00:25:43.990 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:43.990 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.990 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:43.990 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:43.990 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:43.990 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.990 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:43.990 01:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.990 01:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.990 01:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.990 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.990 01:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:43.990 01:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:43.990 01:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:43.990 01:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.990 01:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.990 01:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
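Stepping back, the @101-@104 markers show the shape of the whole sweep: every keyid against every DH group, all under the sha512 digest. A sketch of that outer loop; the dhgroups list below is only what this excerpt has exercised so far:

    # keys has ids 0-4 (keyid 4 carries no controller key, so it also covers
    # unidirectional authentication).
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)   # as seen in this log
    for dhgroup in "${dhgroups[@]}"; do                  # @101
        for keyid in "${!keys[@]}"; do                   # @102
            nvmet_auth_set_key sha512 "$dhgroup" "$keyid"    # @103
            connect_authenticate sha512 "$dhgroup" "$keyid"  # @104
        done
    done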
00:25:43.990 01:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.990 01:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:43.990 01:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:43.990 01:31:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:43.990 01:31:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:43.990 01:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.990 01:31:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.555 nvme0n1 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM0NjdiY2MwNDU2YWViNjBjNzBkYTJjYjY5YTVjOTVjZTVkYTc3ZjljNGE1NDFhtQD/xA==: 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM0NjdiY2MwNDU2YWViNjBjNzBkYTJjYjY5YTVjOTVjZTVkYTc3ZjljNGE1NDFhtQD/xA==: 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: ]] 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
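Every connect in this log is preceded by the same get_main_ns_ip expansion (nvmf/common.sh@741-755). Reconstructed from the xtrace, the helper maps the transport to the name of an environment variable and dereferences it; the exact guard wording and the $TEST_TRANSPORT variable name are assumptions, but each step mirrors a traced line:

    # Hedged reconstruction of get_main_ns_ip as it appears in the xtrace.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP  # rdma runs use the target IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP      # tcp runs (this log) use the initiator IP
        # Traced as "[[ -z tcp ]]" and "[[ -z NVMF_INITIATOR_IP ]]" on common.sh@747.
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1  # indirect expansion; resolves to 10.0.0.1 here
        echo "${!ip}"
    }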
00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.555 01:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.812 nvme0n1 00:25:44.812 01:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjJjZjIzNzYyOWExOGY4NjBiNzkyYWRjMmRjNTU0ZmSTD7aY: 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmUzODUzNmZkN2EwNDNiOGZlOGRiN2NiNTljMzk3ZjmWNCeq: 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjJjZjIzNzYyOWExOGY4NjBiNzkyYWRjMmRjNTU0ZmSTD7aY: 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmUzODUzNmZkN2EwNDNiOGZlOGRiN2NiNTljMzk3ZjmWNCeq: ]] 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmUzODUzNmZkN2EwNDNiOGZlOGRiN2NiNTljMzk3ZjmWNCeq: 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.813 01:31:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.377 nvme0n1 00:25:45.377 01:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.377 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.377 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.377 01:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.377 01:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.377 01:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.377 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.377 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.377 01:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.377 01:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.377 01:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.377 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.377 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:25:45.377 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.377 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:45.377 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:45.378 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:45.378 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmRjOWZhYzNkOTdkNzU1ZjdjZWMyNmZiZTA1OWE0NzkwOTYxMWViMTlhZTM0MWYwJf2+Rw==: 00:25:45.378 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTY2NDY2M2E0YzY4NjU5NzRlNjRhMDY4NzUzMjdjODaLM0W8: 00:25:45.378 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:45.378 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:45.378 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmRjOWZhYzNkOTdkNzU1ZjdjZWMyNmZiZTA1OWE0NzkwOTYxMWViMTlhZTM0MWYwJf2+Rw==: 00:25:45.378 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTY2NDY2M2E0YzY4NjU5NzRlNjRhMDY4NzUzMjdjODaLM0W8: ]] 00:25:45.378 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTY2NDY2M2E0YzY4NjU5NzRlNjRhMDY4NzUzMjdjODaLM0W8: 00:25:45.378 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:45.378 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.378 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:45.378 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:45.378 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:45.378 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.378 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:45.378 01:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.378 01:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.378 01:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.378 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.378 01:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:45.378 01:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:45.378 01:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:45.378 01:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.378 01:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.378 01:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:45.378 01:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.378 01:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:45.378 01:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:45.378 01:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:45.378 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:45.378 01:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.378 01:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.944 nvme0n1 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NzIxNDNlNGFjMGQ2NzAxN2E4MGE3NDI4ZTljMGNhYjY2MTQxNTVmZGI3MDZmYzhiZDEwYWI3MDFiNTg2OTYyNrbisAI=: 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzIxNDNlNGFjMGQ2NzAxN2E4MGE3NDI4ZTljMGNhYjY2MTQxNTVmZGI3MDZmYzhiZDEwYWI3MDFiNTg2OTYyNrbisAI=: 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.944 01:31:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.202 nvme0n1 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.202 01:31:12 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODBjODllNzViMmM3ZTkyMDhiNmM3NTk3ZjE3NTIwNGThstEF: 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWYyMTQ5MzI4MGUxNGJmYmM2YmZkNWYxYzgxMWM1MzE2YjU0N2FlNDBiZGIwMTJjNmY0NjI0MjA0MjA5ODJmNHLFhP8=: 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODBjODllNzViMmM3ZTkyMDhiNmM3NTk3ZjE3NTIwNGThstEF: 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWYyMTQ5MzI4MGUxNGJmYmM2YmZkNWYxYzgxMWM1MzE2YjU0N2FlNDBiZGIwMTJjNmY0NjI0MjA0MjA5ODJmNHLFhP8=: ]] 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWYyMTQ5MzI4MGUxNGJmYmM2YmZkNWYxYzgxMWM1MzE2YjU0N2FlNDBiZGIwMTJjNmY0NjI0MjA0MjA5ODJmNHLFhP8=: 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.202 01:31:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.768 nvme0n1 00:25:46.768 01:31:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.768 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.768 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.768 01:31:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.768 01:31:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.768 01:31:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.025 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.025 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.025 01:31:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.026 01:31:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.026 01:31:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.026 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.026 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:47.026 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.026 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:47.026 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:47.026 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:47.026 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM0NjdiY2MwNDU2YWViNjBjNzBkYTJjYjY5YTVjOTVjZTVkYTc3ZjljNGE1NDFhtQD/xA==: 00:25:47.026 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: 00:25:47.026 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:47.026 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:47.026 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OGM0NjdiY2MwNDU2YWViNjBjNzBkYTJjYjY5YTVjOTVjZTVkYTc3ZjljNGE1NDFhtQD/xA==: 00:25:47.026 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: ]] 00:25:47.026 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: 00:25:47.026 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:47.026 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.026 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:47.026 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:47.026 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:47.026 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.026 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:47.026 01:31:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.026 01:31:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.026 01:31:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.026 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.026 01:31:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:47.026 01:31:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:47.026 01:31:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:47.026 01:31:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.026 01:31:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.026 01:31:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:47.026 01:31:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.026 01:31:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:47.026 01:31:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:47.026 01:31:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:47.026 01:31:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:47.026 01:31:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.026 01:31:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.590 nvme0n1 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.590 01:31:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjJjZjIzNzYyOWExOGY4NjBiNzkyYWRjMmRjNTU0ZmSTD7aY: 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmUzODUzNmZkN2EwNDNiOGZlOGRiN2NiNTljMzk3ZjmWNCeq: 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjJjZjIzNzYyOWExOGY4NjBiNzkyYWRjMmRjNTU0ZmSTD7aY: 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmUzODUzNmZkN2EwNDNiOGZlOGRiN2NiNTljMzk3ZjmWNCeq: ]] 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmUzODUzNmZkN2EwNDNiOGZlOGRiN2NiNTljMzk3ZjmWNCeq: 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.590 01:31:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.153 nvme0n1 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmRjOWZhYzNkOTdkNzU1ZjdjZWMyNmZiZTA1OWE0NzkwOTYxMWViMTlhZTM0MWYwJf2+Rw==: 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTY2NDY2M2E0YzY4NjU5NzRlNjRhMDY4NzUzMjdjODaLM0W8: 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmRjOWZhYzNkOTdkNzU1ZjdjZWMyNmZiZTA1OWE0NzkwOTYxMWViMTlhZTM0MWYwJf2+Rw==: 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTY2NDY2M2E0YzY4NjU5NzRlNjRhMDY4NzUzMjdjODaLM0W8: ]] 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTY2NDY2M2E0YzY4NjU5NzRlNjRhMDY4NzUzMjdjODaLM0W8: 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:48.153 01:31:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.153 01:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.717 nvme0n1 00:25:48.717 01:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.717 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.717 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.717 01:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.717 01:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.717 01:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.717 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.717 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.717 01:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.717 01:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.974 01:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.974 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:48.974 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:48.974 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.974 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:48.974 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:48.974 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:48.974 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzIxNDNlNGFjMGQ2NzAxN2E4MGE3NDI4ZTljMGNhYjY2MTQxNTVmZGI3MDZmYzhiZDEwYWI3MDFiNTg2OTYyNrbisAI=: 00:25:48.974 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:48.974 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:48.974 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:48.974 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzIxNDNlNGFjMGQ2NzAxN2E4MGE3NDI4ZTljMGNhYjY2MTQxNTVmZGI3MDZmYzhiZDEwYWI3MDFiNTg2OTYyNrbisAI=: 00:25:48.974 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:48.974 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:48.974 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.974 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:48.974 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:48.974 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:48.974 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.974 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:48.974 01:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.974 01:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.974 01:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.974 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.974 01:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:48.974 01:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:48.974 01:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:48.974 01:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.974 01:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.974 01:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:48.974 01:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.974 01:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:48.974 01:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:48.974 01:31:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:48.974 01:31:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:48.974 01:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:48.974 01:31:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.539 nvme0n1 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGM0NjdiY2MwNDU2YWViNjBjNzBkYTJjYjY5YTVjOTVjZTVkYTc3ZjljNGE1NDFhtQD/xA==: 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGM0NjdiY2MwNDU2YWViNjBjNzBkYTJjYjY5YTVjOTVjZTVkYTc3ZjljNGE1NDFhtQD/xA==: 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: ]] 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDAyMjA3ZGQzNmVkMDFiOTA3NDlhY2U5ZjBkZTcwODk5MTM0MmJkZmU5NDM4ZGIxhf9vQw==: 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.539 
01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.539 request: 00:25:49.539 { 00:25:49.539 "name": "nvme0", 00:25:49.539 "trtype": "tcp", 00:25:49.539 "traddr": "10.0.0.1", 00:25:49.539 "adrfam": "ipv4", 00:25:49.539 "trsvcid": "4420", 00:25:49.539 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:49.539 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:49.539 "prchk_reftag": false, 00:25:49.539 "prchk_guard": false, 00:25:49.539 "hdgst": false, 00:25:49.539 "ddgst": false, 00:25:49.539 "method": "bdev_nvme_attach_controller", 00:25:49.539 "req_id": 1 00:25:49.539 } 00:25:49.539 Got JSON-RPC error response 00:25:49.539 response: 00:25:49.539 { 00:25:49.539 "code": -5, 00:25:49.539 "message": "Input/output error" 00:25:49.539 } 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.539 01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:49.540 01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:49.540 01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:49.540 01:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:49.540 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:25:49.540 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:49.540 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:49.540 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:49.540 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:49.540 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:49.540 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:49.540 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.540 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.798 request: 00:25:49.798 { 00:25:49.798 "name": "nvme0", 00:25:49.798 "trtype": "tcp", 00:25:49.798 "traddr": "10.0.0.1", 00:25:49.798 "adrfam": "ipv4", 00:25:49.798 "trsvcid": "4420", 00:25:49.798 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:49.798 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:49.798 "prchk_reftag": false, 00:25:49.798 "prchk_guard": false, 00:25:49.798 "hdgst": false, 00:25:49.798 "ddgst": false, 00:25:49.798 "dhchap_key": "key2", 00:25:49.798 "method": "bdev_nvme_attach_controller", 00:25:49.798 "req_id": 1 00:25:49.798 } 00:25:49.798 Got JSON-RPC error response 00:25:49.798 response: 00:25:49.798 { 00:25:49.798 "code": -5, 00:25:49.798 "message": "Input/output error" 00:25:49.798 } 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:25:49.798 01:31:15 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.798 request: 00:25:49.798 { 00:25:49.798 "name": "nvme0", 00:25:49.798 "trtype": "tcp", 00:25:49.798 "traddr": "10.0.0.1", 00:25:49.798 "adrfam": "ipv4", 
00:25:49.798 "trsvcid": "4420", 00:25:49.798 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:49.798 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:49.798 "prchk_reftag": false, 00:25:49.798 "prchk_guard": false, 00:25:49.798 "hdgst": false, 00:25:49.798 "ddgst": false, 00:25:49.798 "dhchap_key": "key1", 00:25:49.798 "dhchap_ctrlr_key": "ckey2", 00:25:49.798 "method": "bdev_nvme_attach_controller", 00:25:49.798 "req_id": 1 00:25:49.798 } 00:25:49.798 Got JSON-RPC error response 00:25:49.798 response: 00:25:49.798 { 00:25:49.798 "code": -5, 00:25:49.798 "message": "Input/output error" 00:25:49.798 } 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:49.798 rmmod nvme_tcp 00:25:49.798 rmmod nvme_fabrics 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 3515497 ']' 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 3515497 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 3515497 ']' 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 3515497 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3515497 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3515497' 00:25:49.798 killing process with pid 3515497 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 3515497 00:25:49.798 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 3515497 00:25:50.056 01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:25:50.056 01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:50.056 01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:50.056 01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:50.056 01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:50.056 01:31:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.056 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:50.056 01:31:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:52.586 01:31:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:52.586 01:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:52.586 01:31:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:52.586 01:31:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:52.586 01:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:52.586 01:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:25:52.586 01:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:52.587 01:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:52.587 01:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:52.587 01:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:52.587 01:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:52.587 01:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:25:52.587 01:31:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:55.115 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:55.115 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:55.115 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:55.115 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:55.115 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:55.115 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:55.115 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:55.115 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:55.115 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:55.115 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:55.115 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:55.115 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:55.115 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:55.115 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:55.115 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:55.115 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:56.489 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:56.489 01:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.ck5 /tmp/spdk.key-null.qiO /tmp/spdk.key-sha256.aow /tmp/spdk.key-sha384.5nf /tmp/spdk.key-sha512.r19 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:25:56.489 01:31:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:59.017 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:25:59.017 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:59.017 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:25:59.017 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:25:59.017 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:25:59.017 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:25:59.017 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:25:59.017 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:25:59.017 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:25:59.017 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:25:59.017 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:25:59.017 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:25:59.017 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:25:59.017 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:25:59.017 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:25:59.017 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:25:59.017 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:25:59.017 00:25:59.017 real 0m49.321s 00:25:59.017 user 0m43.337s 00:25:59.017 sys 0m11.738s 00:25:59.017 01:31:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:59.017 01:31:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.017 ************************************ 00:25:59.017 END TEST nvmf_auth_host 00:25:59.017 ************************************ 00:25:59.017 01:31:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:59.017 01:31:24 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:25:59.017 01:31:24 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:59.017 01:31:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:59.017 01:31:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:59.017 01:31:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:59.017 ************************************ 00:25:59.017 START TEST nvmf_digest 00:25:59.017 ************************************ 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:59.017 * Looking for test storage... 
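
A recap of the two authentication failures traced above, since the xtrace is dense: both are deliberate negative tests against the kernel nvmet target at 10.0.0.1 (the configfs entries removed during cleanup belong to it). Stripped of the NOT/valid_exec_arg wrapping, each boils down to a single RPC; rpc_cmd is functionally equivalent to invoking scripts/rpc.py with the same arguments, so the second attempt can be read as the sketch below, with the flags copied from the trace:

  # Host authenticates with key1 and additionally requires the controller
  # to authenticate with ckey2; the target is provisioned with a different
  # pairing, so the attach must fail. The first attempt (key2 only) fails
  # the same way.
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey2
  # Logged outcome: JSON-RPC error -5 (Input/output error). The NOT wrapper
  # inverts the non-zero exit status, so the failure counts as a pass.
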
00:25:59.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:59.017 01:31:24 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:25:59.017 01:31:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:04.273 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:04.273 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:04.273 Found net devices under 0000:86:00.0: cvl_0_0 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:04.273 Found net devices under 0000:86:00.1: cvl_0_1 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:04.273 01:31:29 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:04.273 01:31:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:04.273 01:31:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:04.273 01:31:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:04.273 01:31:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:04.273 01:31:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:04.273 01:31:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:04.273 01:31:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:04.273 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:04.273 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:26:04.273 00:26:04.273 --- 10.0.0.2 ping statistics --- 00:26:04.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:04.273 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:26:04.273 01:31:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:04.273 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:04.273 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:26:04.273 00:26:04.273 --- 10.0.0.1 ping statistics --- 00:26:04.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:04.274 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:26:04.274 01:31:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:04.274 01:31:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:26:04.274 01:31:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:04.274 01:31:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:04.274 01:31:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:04.274 01:31:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:04.274 01:31:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:04.274 01:31:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:04.274 01:31:30 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:04.274 01:31:30 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:04.274 01:31:30 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:04.274 01:31:30 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:04.274 01:31:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:04.274 01:31:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:04.274 01:31:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:04.274 ************************************ 00:26:04.274 START TEST nvmf_digest_clean 00:26:04.274 ************************************ 00:26:04.274 01:31:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:26:04.274 01:31:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:26:04.274 01:31:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:04.274 01:31:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:04.274 01:31:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:04.274 01:31:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:04.274 01:31:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:04.274 01:31:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:04.274 01:31:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:04.274 01:31:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=3528375 00:26:04.274 01:31:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 3528375 00:26:04.274 01:31:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:04.274 01:31:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3528375 ']' 00:26:04.274 01:31:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:04.274 
01:31:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:04.274 01:31:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:04.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:04.274 01:31:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:04.274 01:31:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:04.530 [2024-07-16 01:31:30.277489] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:26:04.530 [2024-07-16 01:31:30.277532] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:04.530 EAL: No free 2048 kB hugepages reported on node 1 00:26:04.530 [2024-07-16 01:31:30.337867] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.530 [2024-07-16 01:31:30.416786] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:04.530 [2024-07-16 01:31:30.416823] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:04.530 [2024-07-16 01:31:30.416830] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:04.530 [2024-07-16 01:31:30.416836] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:04.530 [2024-07-16 01:31:30.416841] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
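
For orientation while the target comes up: nvmf_tcp_init above split the E810 port pair so that cvl_0_0 (10.0.0.2) lives inside the cvl_0_0_ns_spdk network namespace and cvl_0_1 (10.0.0.1) stays in the root namespace, with both directions verified by ping. The nvmf_tgt whose startup notices appear here therefore runs entirely inside that namespace. Condensed from the trace, with the Jenkins workspace prefix shortened to the spdk checkout:

  # target side: nvmf_tgt owns cvl_0_0/10.0.0.2 inside the namespace,
  # paused at --wait-for-rpc until the harness finishes configuration
  ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
  # initiator side: accept NVMe/TCP traffic on port 4420 for cvl_0_1
  # (rule copied verbatim from the trace)
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
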
00:26:04.530 [2024-07-16 01:31:30.416860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:05.091 01:31:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:05.091 01:31:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:05.091 01:31:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:05.091 01:31:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:05.091 01:31:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:05.347 01:31:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:05.347 01:31:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:05.347 01:31:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:05.347 01:31:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:05.347 01:31:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.347 01:31:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:05.347 null0 00:26:05.347 [2024-07-16 01:31:31.182795] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:05.347 [2024-07-16 01:31:31.206980] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:05.347 01:31:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.347 01:31:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:26:05.347 01:31:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:05.347 01:31:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:05.347 01:31:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:05.347 01:31:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:05.347 01:31:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:05.347 01:31:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:05.347 01:31:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3528605 00:26:05.347 01:31:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3528605 /var/tmp/bperf.sock 00:26:05.347 01:31:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:05.347 01:31:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3528605 ']' 00:26:05.347 01:31:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:05.347 01:31:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:05.348 01:31:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:26:05.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:05.348 01:31:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:05.348 01:31:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:05.348 [2024-07-16 01:31:31.259212] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:26:05.348 [2024-07-16 01:31:31.259253] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3528605 ] 00:26:05.348 EAL: No free 2048 kB hugepages reported on node 1 00:26:05.348 [2024-07-16 01:31:31.311974] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.604 [2024-07-16 01:31:31.384781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:06.175 01:31:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:06.175 01:31:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:06.175 01:31:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:06.175 01:31:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:06.175 01:31:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:06.465 01:31:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:06.465 01:31:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:06.730 nvme0n1 00:26:06.730 01:31:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:06.730 01:31:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:06.730 Running I/O for 2 seconds... 
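
Each run_bperf case follows the same four-step pattern, all of it visible in the xtrace above. As a condensed sketch for this randread/4096/qd128 case (workspace prefix shortened to the spdk checkout):

  # 1. Start bdevperf on its own RPC socket, paused until RPC init
  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
    -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  # 2. Finish framework initialization over that socket
  ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  # 3. Attach the target with data digest enabled (--ddgst), so every
  #    data PDU carries a CRC32C that the digest code paths must handle
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # 4. Kick off the timed workload
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The two-second results follow.
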
00:26:09.254 00:26:09.254 Latency(us) 00:26:09.254 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:09.254 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:09.254 nvme0n1 : 2.00 26273.87 102.63 0.00 0.00 4866.64 2184.53 14230.67 00:26:09.254 =================================================================================================================== 00:26:09.254 Total : 26273.87 102.63 0.00 0.00 4866.64 2184.53 14230.67 00:26:09.254 0 00:26:09.254 01:31:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:09.254 01:31:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:09.254 01:31:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:09.254 01:31:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:09.254 | select(.opcode=="crc32c") 00:26:09.254 | "\(.module_name) \(.executed)"' 00:26:09.254 01:31:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:09.254 01:31:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:09.254 01:31:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:09.254 01:31:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:09.254 01:31:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:09.254 01:31:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3528605 00:26:09.254 01:31:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3528605 ']' 00:26:09.254 01:31:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3528605 00:26:09.254 01:31:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:09.254 01:31:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:09.254 01:31:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3528605 00:26:09.254 01:31:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:09.254 01:31:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:09.254 01:31:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3528605' 00:26:09.254 killing process with pid 3528605 00:26:09.254 01:31:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3528605 00:26:09.254 Received shutdown signal, test time was about 2.000000 seconds 00:26:09.254 00:26:09.254 Latency(us) 00:26:09.254 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:09.254 =================================================================================================================== 00:26:09.254 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:09.254 01:31:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3528605 00:26:09.254 01:31:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:09.254 01:31:35 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:09.254 01:31:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:09.254 01:31:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:09.254 01:31:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:09.254 01:31:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:09.254 01:31:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:09.254 01:31:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3529105 00:26:09.254 01:31:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3529105 /var/tmp/bperf.sock 00:26:09.254 01:31:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:09.254 01:31:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3529105 ']' 00:26:09.254 01:31:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:09.254 01:31:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:09.254 01:31:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:09.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:09.254 01:31:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:09.254 01:31:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:09.254 [2024-07-16 01:31:35.089247] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:26:09.254 [2024-07-16 01:31:35.089294] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3529105 ] 00:26:09.254 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:09.254 Zero copy mechanism will not be used. 
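
Two notes on this case before its startup log continues. First, the zero-copy message is expected rather than a failure: as the notice itself says, the tool declines zero copy for I/O above its 65536-byte threshold and falls back to copying, so the 128 KiB cases take the ordinary path and the digest logic under test is unaffected. Second, the run_bperf arguments map directly onto bdevperf flags:

  # run_bperf randread 131072 16 false  ->  bdevperf workload flags
  # (-o I/O size in bytes, -q queue depth, -t runtime in seconds,
  #  -z: stay idle until perform_tests arrives over the RPC socket)
  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
    -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc
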
00:26:09.254 EAL: No free 2048 kB hugepages reported on node 1 00:26:09.254 [2024-07-16 01:31:35.145027] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.254 [2024-07-16 01:31:35.216412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:10.185 01:31:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:10.185 01:31:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:10.185 01:31:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:10.185 01:31:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:10.185 01:31:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:10.185 01:31:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:10.185 01:31:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:10.750 nvme0n1 00:26:10.750 01:31:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:10.750 01:31:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:10.750 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:10.750 Zero copy mechanism will not be used. 00:26:10.750 Running I/O for 2 seconds... 
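
Before the next set of results: after every timed run the harness verifies that CRC32C digests were really computed, by querying accel framework statistics over the bperf socket. A condensed sketch of the get_accel_stats check, matching the jq pipeline in the trace (the read/compare steps are spread across several xtrace lines above):

  # expected: module_name == software (scan_dsa=false in every case here,
  # so no DSA offload) and a non-zero "executed" count for crc32c
  ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[]
              | select(.opcode=="crc32c")
              | "\(.module_name) \(.executed)"' \
    | { read -r acc_module acc_executed &&
        [[ $acc_module == software ]] && (( acc_executed > 0 )); }
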
00:26:13.281 00:26:13.281 Latency(us) 00:26:13.281 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:13.281 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:13.281 nvme0n1 : 2.00 5462.89 682.86 0.00 0.00 2926.32 928.43 4649.94 00:26:13.281 =================================================================================================================== 00:26:13.281 Total : 5462.89 682.86 0.00 0.00 2926.32 928.43 4649.94 00:26:13.281 0 00:26:13.281 01:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:13.281 01:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:13.281 01:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:13.281 01:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:13.281 | select(.opcode=="crc32c") 00:26:13.281 | "\(.module_name) \(.executed)"' 00:26:13.281 01:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:13.281 01:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:13.281 01:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:13.281 01:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:13.281 01:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:13.281 01:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3529105 00:26:13.281 01:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3529105 ']' 00:26:13.281 01:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3529105 00:26:13.281 01:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:13.281 01:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:13.281 01:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3529105 00:26:13.281 01:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:13.281 01:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:13.281 01:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3529105' 00:26:13.281 killing process with pid 3529105 00:26:13.281 01:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3529105 00:26:13.281 Received shutdown signal, test time was about 2.000000 seconds 00:26:13.281 00:26:13.281 Latency(us) 00:26:13.281 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:13.281 =================================================================================================================== 00:26:13.281 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:13.281 01:31:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3529105 00:26:13.281 01:31:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:13.281 01:31:39 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:13.281 01:31:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:13.281 01:31:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:13.281 01:31:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:13.281 01:31:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:13.281 01:31:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:13.281 01:31:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3529801 00:26:13.281 01:31:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3529801 /var/tmp/bperf.sock 00:26:13.281 01:31:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:13.281 01:31:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3529801 ']' 00:26:13.281 01:31:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:13.281 01:31:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:13.281 01:31:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:13.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:13.281 01:31:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:13.281 01:31:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:13.281 [2024-07-16 01:31:39.100303] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
00:26:13.281 [2024-07-16 01:31:39.100357] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3529801 ] 00:26:13.281 EAL: No free 2048 kB hugepages reported on node 1 00:26:13.281 [2024-07-16 01:31:39.157253] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.281 [2024-07-16 01:31:39.224626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:14.215 01:31:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:14.215 01:31:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:14.215 01:31:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:14.215 01:31:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:14.215 01:31:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:14.215 01:31:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:14.215 01:31:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:14.779 nvme0n1 00:26:14.779 01:31:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:14.779 01:31:40 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:14.779 Running I/O for 2 seconds... 
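
While this run executes, one more recurring pattern worth unpacking: between cases the bperf process is torn down through the killprocess helper, whose xtrace appears after each result table. The sketch below reconstructs only the steps visible in those traces; the real helper in common/autotest_common.sh has additional branches (other OSes, sudo-owned processes):

  killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                    # '[' -z <pid> ']'
    kill -0 "$pid" || return 0                   # already gone?
    local process_name
    if [ "$(uname)" = Linux ]; then
      process_name=$(ps --no-headers -o comm= "$pid")
    fi
    if [ "$process_name" != sudo ]; then         # reactor_1 here
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                # reap, surface status
    fi
  }
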
00:26:16.678
00:26:16.678 Latency(us)
00:26:16.678 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:16.678 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:26:16.678 nvme0n1 : 2.00 29143.71 113.84 0.00 0.00 4386.65 1927.07 6896.88
00:26:16.678 ===================================================================================================================
00:26:16.678 Total : 29143.71 113.84 0.00 0.00 4386.65 1927.07 6896.88
00:26:16.678 0
00:26:16.678 01:31:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:26:16.678 01:31:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:26:16.678 01:31:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:26:16.678 01:31:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:26:16.678 01:31:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:26:16.678 | select(.opcode=="crc32c")
00:26:16.678 | "\(.module_name) \(.executed)"'
00:26:16.936 01:31:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:26:16.936 01:31:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:26:16.936 01:31:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:26:16.936 01:31:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:26:16.936 01:31:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3529801
00:26:16.936 01:31:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3529801 ']'
00:26:16.936 01:31:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3529801
00:26:16.936 01:31:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:26:16.936 01:31:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:16.936 01:31:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3529801
00:26:16.936 01:31:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:26:16.936 01:31:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:26:16.936 01:31:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3529801'
killing process with pid 3529801
01:31:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3529801
Received shutdown signal, test time was about 2.000000 seconds
00:26:16.936
00:26:16.936 Latency(us)
00:26:16.936 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:16.936 ===================================================================================================================
00:26:16.936 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:16.936 01:31:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3529801
00:26:17.194 01:31:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
00:26:17.194 01:31:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:26:17.194 01:31:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:26:17.194 01:31:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:26:17.194 01:31:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072
00:26:17.194 01:31:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16
00:26:17.194 01:31:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:26:17.194 01:31:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3530497
00:26:17.194 01:31:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3530497 /var/tmp/bperf.sock
00:26:17.194 01:31:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:26:17.194 01:31:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3530497 ']'
00:26:17.194 01:31:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:17.194 01:31:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100
00:26:17.194 01:31:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:17.194 01:31:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable
00:26:17.194 01:31:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:26:17.194 [2024-07-16 01:31:43.051285] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization...
00:26:17.194 [2024-07-16 01:31:43.051332] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3530497 ]
00:26:17.194 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:17.194 Zero copy mechanism will not be used.
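A quick sanity check of the result table above: the MiB/s column is just IOPS times I/O size, so 29143.71 IOPS x 4096 B / 1048576 comes out to 113.84 MiB/s for the 4 KiB run, and the 128 KiB run that follows reports 7063.05 IOPS, which by the same arithmetic should (and does) give 882.88 MiB/s:

  # Pure arithmetic cross-check of the bdevperf tables (not part of the harness):
  echo 'scale=2; 29143.71 * 4096 / 1048576' | bc    # -> 113.84 (4 KiB, qd 128 run above)
  echo 'scale=2; 7063.05 * 131072 / 1048576' | bc   # -> 882.88 (128 KiB, qd 16 run below)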
00:26:17.194 EAL: No free 2048 kB hugepages reported on node 1
00:26:17.194 [2024-07-16 01:31:43.106206] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:17.194 [2024-07-16 01:31:43.172851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:18.126 01:31:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:18.126 01:31:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0
00:26:18.126 01:31:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:26:18.126 01:31:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:26:18.126 01:31:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:26:18.126 01:31:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:18.126 01:31:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:18.384 nvme0n1
00:26:18.384 01:31:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:26:18.384 01:31:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:18.642 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:18.642 Zero copy mechanism will not be used.
00:26:18.642 Running I/O for 2 seconds...
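Once this run completes, the harness repeats the crc32c accounting check seen after the first run: it pulls accel-framework statistics out of bdevperf and asserts that the digest work was executed by the expected module, which is software here since scan_dsa=false. A sketch of that check, reusing the exact jq filter from the trace (the surrounding shell is a reconstruction of the host/digest.sh@93-96 xtrace; the JSON field names come from the filter itself):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Read "module executed-count" for the crc32c opcode out of bdevperf's accel stats:
  read -r acc_module acc_executed < <("$RPC" -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
  exp_module=software   # no DSA offload in this run (scan_dsa=false)
  (( acc_executed > 0 )) && [[ $acc_module == "$exp_module" ]] \
      && echo "digests were computed by $acc_module ($acc_executed crc32c ops)"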
00:26:20.542
00:26:20.542 Latency(us)
00:26:20.542 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:20.542 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:26:20.542 nvme0n1 : 2.00 7063.05 882.88 0.00 0.00 2261.69 1638.40 9112.62
00:26:20.542 ===================================================================================================================
00:26:20.542 Total : 7063.05 882.88 0.00 0.00 2261.69 1638.40 9112.62
00:26:20.542 0
00:26:20.543 01:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:26:20.543 01:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:26:20.543 01:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:26:20.543 01:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:26:20.543 | select(.opcode=="crc32c")
00:26:20.543 | "\(.module_name) \(.executed)"'
00:26:20.543 01:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:26:20.800 01:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:26:20.800 01:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:26:20.800 01:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:26:20.800 01:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:26:20.800 01:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3530497
00:26:20.800 01:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3530497 ']'
00:26:20.800 01:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3530497
00:26:20.800 01:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:26:20.800 01:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:20.800 01:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3530497
00:26:20.801 01:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:26:20.801 01:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:26:20.801 01:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3530497'
killing process with pid 3530497
01:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3530497
Received shutdown signal, test time was about 2.000000 seconds
00:26:20.801
00:26:20.801 Latency(us)
00:26:20.801 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:20.801 ===================================================================================================================
00:26:20.801 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:20.801 01:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3530497
00:26:21.058 01:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3528375
00:26:21.058 01:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3528375 ']'
00:26:21.058 01:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3528375
00:26:21.058 01:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:26:21.058 01:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:21.058 01:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3528375
00:26:21.058 01:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:26:21.058 01:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:26:21.058 01:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3528375'
killing process with pid 3528375
01:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3528375
00:26:21.058 01:31:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3528375
00:26:21.058
00:26:21.058 real 0m16.822s
00:26:21.058 user 0m31.963s
00:26:21.058 sys 0m4.613s
00:26:21.058 01:31:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable
00:26:21.058 01:31:47 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:26:21.058 ************************************
00:26:21.059 END TEST nvmf_digest_clean
00:26:21.059 ************************************
00:26:21.316 01:31:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0
00:26:21.316 01:31:47 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:26:21.316 01:31:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:26:21.316 01:31:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable
00:26:21.316 01:31:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:26:21.317 ************************************
00:26:21.317 START TEST nvmf_digest_error
00:26:21.317 ************************************
00:26:21.317 01:31:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error
00:26:21.317 01:31:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:26:21.317 01:31:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:26:21.317 01:31:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable
00:26:21.317 01:31:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:21.317 01:31:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=3531220
00:26:21.317 01:31:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 3531220
00:26:21.317 01:31:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:26:21.317 01:31:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3531220 ']'
00:26:21.317 01:31:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:21.317 01:31:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:26:21.317 01:31:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:21.317 01:31:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:26:21.317 01:31:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:21.317 [2024-07-16 01:31:47.162799] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization...
00:26:21.317 [2024-07-16 01:31:47.162838] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:21.317 EAL: No free 2048 kB hugepages reported on node 1
00:26:21.317 [2024-07-16 01:31:47.219675] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:21.317 [2024-07-16 01:31:47.297020] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:21.317 [2024-07-16 01:31:47.297057] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:21.317 [2024-07-16 01:31:47.297064] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:21.317 [2024-07-16 01:31:47.297069] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:21.317 [2024-07-16 01:31:47.297075] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
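For the error test the target is restarted paused inside the test namespace with all tracepoint groups enabled (-e 0xFFFF), precisely so that crc32c can be rerouted to the injectable error module before the framework comes up. A sketch of that startup, built from the commands in this trace; the framework_start_init step is an assumption here (implied by --wait-for-rpc but not shown verbatim in this excerpt):

  # Start nvmf_tgt paused in the test netns (command as in nvmf/common.sh@480 above):
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Route every crc32c operation to the error-injection accel module (host/digest.sh@104 below):
  "$RPC" accel_assign_opc -o crc32c -m error
  # Assumed: finish startup so the transport/listener configuration below can proceed:
  "$RPC" framework_start_init
  # With -e 0xFFFF, a trace snapshot can be captured at runtime, per the notices above:
  # spdk_trace -s nvmf -i 0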
00:26:21.317 [2024-07-16 01:31:47.297092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:26:22.248 01:31:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:22.248 01:31:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:26:22.248 01:31:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:26:22.248 01:31:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable
00:26:22.248 01:31:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:22.248 01:31:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:22.248 01:31:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:26:22.248 01:31:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:22.248 01:31:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:22.248 [2024-07-16 01:31:47.991099] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:26:22.248 01:31:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:22.248 01:31:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config
00:26:22.248 01:31:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd
00:26:22.248 01:31:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:22.248 01:31:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:22.248 null0
00:26:22.248 [2024-07-16 01:31:48.079944] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:22.248 [2024-07-16 01:31:48.104127] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:22.248 01:31:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:22.248 01:31:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:26:22.248 01:31:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:22.248 01:31:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:26:22.248 01:31:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:26:22.248 01:31:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:26:22.248 01:31:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3531347
00:26:22.248 01:31:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3531347 /var/tmp/bperf.sock
00:26:22.248 01:31:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:26:22.248 01:31:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3531347 ']'
00:26:22.248 01:31:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:22.248 01:31:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:26:22.248 01:31:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:22.248 01:31:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:26:22.248 01:31:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:22.248 [2024-07-16 01:31:48.151985] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization...
00:26:22.248 [2024-07-16 01:31:48.152027] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3531347 ]
00:26:22.248 EAL: No free 2048 kB hugepages reported on node 1
00:26:22.248 [2024-07-16 01:31:48.207697] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:22.504 [2024-07-16 01:31:48.287450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:23.069 01:31:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:23.069 01:31:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:26:23.069 01:31:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:23.069 01:31:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:23.327 01:31:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:23.327 01:31:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:23.327 01:31:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:23.327 01:31:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:23.327 01:31:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:23.327 01:31:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:23.584 nvme0n1
00:26:23.584 01:31:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:26:23.584 01:31:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:23.584 01:31:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:23.584 01:31:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:23.584 01:31:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:23.584 01:31:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
Running I/O for 2 seconds...
00:26:23.842 [2024-07-16 01:31:49.627832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:23.842 [2024-07-16 01:31:49.627868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.842 [2024-07-16 01:31:49.627879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.842 [2024-07-16 01:31:49.639798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:23.842 [2024-07-16 01:31:49.639825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.842 [2024-07-16 01:31:49.639833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.842 [2024-07-16 01:31:49.647816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:23.842 [2024-07-16 01:31:49.647837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.842 [2024-07-16 01:31:49.647846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.842 [2024-07-16 01:31:49.657304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:23.842 [2024-07-16 01:31:49.657326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.842 [2024-07-16 01:31:49.657334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.842 [2024-07-16 01:31:49.666331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:23.842 [2024-07-16 01:31:49.666359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:17571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.842 [2024-07-16 01:31:49.666368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.842 [2024-07-16 01:31:49.675480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:23.842 [2024-07-16 01:31:49.675502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.842 [2024-07-16 01:31:49.675510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.842 [2024-07-16 01:31:49.684842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:23.842 [2024-07-16 01:31:49.684862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.842 [2024-07-16 01:31:49.684870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.842 [2024-07-16 01:31:49.693885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:23.842 [2024-07-16 01:31:49.693906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.842 [2024-07-16 01:31:49.693914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.842 [2024-07-16 01:31:49.702055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:23.842 [2024-07-16 01:31:49.702075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.842 [2024-07-16 01:31:49.702083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.842 [2024-07-16 01:31:49.711610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:23.842 [2024-07-16 01:31:49.711631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:16889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.842 [2024-07-16 01:31:49.711643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.842 [2024-07-16 01:31:49.721268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:23.842 [2024-07-16 01:31:49.721289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.842 [2024-07-16 01:31:49.721297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.842 [2024-07-16 01:31:49.731187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:23.842 [2024-07-16 01:31:49.731207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.842 [2024-07-16 01:31:49.731215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.842 [2024-07-16 01:31:49.739351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:23.842 [2024-07-16 01:31:49.739370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.842 [2024-07-16 01:31:49.739378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.842 [2024-07-16 01:31:49.748985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:23.842 [2024-07-16 01:31:49.749006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.843 [2024-07-16 01:31:49.749014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.843 [2024-07-16 01:31:49.758607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:23.843 [2024-07-16 01:31:49.758627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:4517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.843 [2024-07-16 01:31:49.758635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.843 [2024-07-16 01:31:49.766229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:23.843 [2024-07-16 01:31:49.766250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.843 [2024-07-16 01:31:49.766258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.843 [2024-07-16 01:31:49.775914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:23.843 [2024-07-16 01:31:49.775935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.843 [2024-07-16 01:31:49.775943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.843 [2024-07-16 01:31:49.786159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:23.843 [2024-07-16 01:31:49.786179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.843 [2024-07-16 01:31:49.786187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.843 [2024-07-16 01:31:49.795159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:23.843 [2024-07-16 01:31:49.795179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.843 [2024-07-16 01:31:49.795187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.843 [2024-07-16 01:31:49.803908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:23.843 [2024-07-16 01:31:49.803928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:21439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.843 [2024-07-16 01:31:49.803936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.843 [2024-07-16 01:31:49.812180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:23.843 [2024-07-16 01:31:49.812200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.843 [2024-07-16 01:31:49.812208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:23.843 [2024-07-16 01:31:49.821113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:23.843 [2024-07-16 01:31:49.821133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.843 [2024-07-16 01:31:49.821141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.101 [2024-07-16 01:31:49.830403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.101 [2024-07-16 01:31:49.830427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:15666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.101 [2024-07-16 01:31:49.830436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.101 [2024-07-16 01:31:49.840246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.101 [2024-07-16 01:31:49.840269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.101 [2024-07-16 01:31:49.840278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.101 [2024-07-16 01:31:49.849775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.101 [2024-07-16 01:31:49.849795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.101 [2024-07-16 01:31:49.849804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.101 [2024-07-16 01:31:49.857783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.101 [2024-07-16 01:31:49.857804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.101 [2024-07-16 01:31:49.857812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.101 [2024-07-16 01:31:49.867108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.101 [2024-07-16 01:31:49.867128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:12510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.101 [2024-07-16 01:31:49.867140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.101 [2024-07-16 01:31:49.877019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
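Each triplet above is one failed read as seen from the host: the target-side error module corrupts the computed crc32c, bdevperf detects a data digest mismatch on the TCP qpair, and the command completes with a transient transport error. The flood is driven by the injection RPCs earlier in the trace; a minimal sketch of that toggle, with the arguments exactly as they appear at host/digest.sh@63 and @67 above (note that rpc_cmd talks to the target's default RPC socket, while bperf_rpc uses /var/tmp/bperf.sock):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Keep injection off while the controller attaches cleanly...
  "$RPC" accel_error_inject_error -o crc32c -t disable
  # ...attach with --ddgst as shown above, then start corrupting digests for the run.
  "$RPC" accel_error_inject_error -o crc32c -t corrupt -i 256   # -i 256 copied verbatim from the trace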
00:26:24.101 [2024-07-16 01:31:49.877040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.101 [2024-07-16 01:31:49.877049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.101 [2024-07-16 01:31:49.885812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.101 [2024-07-16 01:31:49.885832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:2705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.101 [2024-07-16 01:31:49.885840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.101 [2024-07-16 01:31:49.897038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.101 [2024-07-16 01:31:49.897058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.101 [2024-07-16 01:31:49.897066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.101 [2024-07-16 01:31:49.905790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.101 [2024-07-16 01:31:49.905810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.101 [2024-07-16 01:31:49.905819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.101 [2024-07-16 01:31:49.917129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.101 [2024-07-16 01:31:49.917148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.101 [2024-07-16 01:31:49.917156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.101 [2024-07-16 01:31:49.929404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.101 [2024-07-16 01:31:49.929425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:19656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.101 [2024-07-16 01:31:49.929433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.101 [2024-07-16 01:31:49.942360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.101 [2024-07-16 01:31:49.942381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:5894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.101 [2024-07-16 01:31:49.942388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.101 [2024-07-16 01:31:49.950358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.101 [2024-07-16 01:31:49.950379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.101 [2024-07-16 01:31:49.950387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.101 [2024-07-16 01:31:49.962054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.101 [2024-07-16 01:31:49.962080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.101 [2024-07-16 01:31:49.962088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.101 [2024-07-16 01:31:49.973553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.101 [2024-07-16 01:31:49.973573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.101 [2024-07-16 01:31:49.973581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.101 [2024-07-16 01:31:49.985808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.102 [2024-07-16 01:31:49.985827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.102 [2024-07-16 01:31:49.985835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.102 [2024-07-16 01:31:49.997822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.102 [2024-07-16 01:31:49.997842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.102 [2024-07-16 01:31:49.997849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.102 [2024-07-16 01:31:50.010899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.102 [2024-07-16 01:31:50.010991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.102 [2024-07-16 01:31:50.011007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.102 [2024-07-16 01:31:50.021645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.102 [2024-07-16 01:31:50.021665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.102 [2024-07-16 01:31:50.021674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.102 [2024-07-16 01:31:50.030422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.102 [2024-07-16 01:31:50.030443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.102 [2024-07-16 01:31:50.030451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.102 [2024-07-16 01:31:50.040341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.102 [2024-07-16 01:31:50.040378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.102 [2024-07-16 01:31:50.040390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.102 [2024-07-16 01:31:50.049703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.102 [2024-07-16 01:31:50.049723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.102 [2024-07-16 01:31:50.049731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.102 [2024-07-16 01:31:50.058353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.102 [2024-07-16 01:31:50.058373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.102 [2024-07-16 01:31:50.058381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.102 [2024-07-16 01:31:50.072304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.102 [2024-07-16 01:31:50.072326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.102 [2024-07-16 01:31:50.072334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.102 [2024-07-16 01:31:50.080327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.102 [2024-07-16 01:31:50.080351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.102 [2024-07-16 01:31:50.080361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.361 [2024-07-16 01:31:50.092682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.361 [2024-07-16 01:31:50.092706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.361 [2024-07-16 01:31:50.092715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
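Note that none of these failures aborts the job: every completion carries status (00/22), which reads as status code type 0, status code 0x22, NVMe's generic Transient Transport Error, and the host was configured at host/digest.sh@61 above to keep per-code NVMe error statistics and retry failed I/O instead of failing the bdev job (a retry count of -1 reads as unlimited here). That one host-side call, verbatim from the trace:

  # Set before attaching the controller: NVMe error counters plus unbounded bdev retries,
  # so the injected digest failures keep the 2-second run alive instead of killing it.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1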
00:26:24.361 [2024-07-16 01:31:50.103305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.361 [2024-07-16 01:31:50.103327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.361 [2024-07-16 01:31:50.103336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.361 [2024-07-16 01:31:50.112365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.361 [2024-07-16 01:31:50.112387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.361 [2024-07-16 01:31:50.112396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.361 [2024-07-16 01:31:50.123371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.361 [2024-07-16 01:31:50.123392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.361 [2024-07-16 01:31:50.123401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.361 [2024-07-16 01:31:50.135410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.361 [2024-07-16 01:31:50.135433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.361 [2024-07-16 01:31:50.135441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.361 [2024-07-16 01:31:50.145637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.361 [2024-07-16 01:31:50.145660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.361 [2024-07-16 01:31:50.145672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.361 [2024-07-16 01:31:50.154852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.361 [2024-07-16 01:31:50.154875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.361 [2024-07-16 01:31:50.154883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.361 [2024-07-16 01:31:50.163245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.361 [2024-07-16 01:31:50.163267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.361 [2024-07-16 01:31:50.163275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.361 [2024-07-16 01:31:50.174171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.361 [2024-07-16 01:31:50.174193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.361 [2024-07-16 01:31:50.174201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.361 [2024-07-16 01:31:50.186519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.361 [2024-07-16 01:31:50.186540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.361 [2024-07-16 01:31:50.186548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.361 [2024-07-16 01:31:50.198685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.361 [2024-07-16 01:31:50.198705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.361 [2024-07-16 01:31:50.198713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.361 [2024-07-16 01:31:50.210722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.361 [2024-07-16 01:31:50.210742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.361 [2024-07-16 01:31:50.210750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.361 [2024-07-16 01:31:50.223322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.361 [2024-07-16 01:31:50.223348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:13801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.361 [2024-07-16 01:31:50.223356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.361 [2024-07-16 01:31:50.234032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.361 [2024-07-16 01:31:50.234051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.361 [2024-07-16 01:31:50.234059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.361 [2024-07-16 01:31:50.245372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.361 [2024-07-16 01:31:50.245395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:8422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.361 [2024-07-16 01:31:50.245403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.361 [2024-07-16 01:31:50.254024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.361 [2024-07-16 01:31:50.254045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:7435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.361 [2024-07-16 01:31:50.254053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.361 [2024-07-16 01:31:50.263238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.361 [2024-07-16 01:31:50.263259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.361 [2024-07-16 01:31:50.263267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.361 [2024-07-16 01:31:50.271796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.361 [2024-07-16 01:31:50.271816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:7834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.361 [2024-07-16 01:31:50.271824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.361 [2024-07-16 01:31:50.280403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.361 [2024-07-16 01:31:50.280423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.362 [2024-07-16 01:31:50.280431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.362 [2024-07-16 01:31:50.289345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.362 [2024-07-16 01:31:50.289364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.362 [2024-07-16 01:31:50.289372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.362 [2024-07-16 01:31:50.298223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.362 [2024-07-16 01:31:50.298244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.362 [2024-07-16 01:31:50.298252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.362 [2024-07-16 01:31:50.307186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.362 [2024-07-16 01:31:50.307206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.362 [2024-07-16 01:31:50.307213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.362 [2024-07-16 01:31:50.317238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.362 [2024-07-16 01:31:50.317257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:13278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.362 [2024-07-16 01:31:50.317265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.362 [2024-07-16 01:31:50.326194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.362 [2024-07-16 01:31:50.326214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.362 [2024-07-16 01:31:50.326223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.362 [2024-07-16 01:31:50.335152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.362 [2024-07-16 01:31:50.335172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:17798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.362 [2024-07-16 01:31:50.335180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.362 [2024-07-16 01:31:50.344147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.362 [2024-07-16 01:31:50.344170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:6659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.362 [2024-07-16 01:31:50.344179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.619 [2024-07-16 01:31:50.353342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.619 [2024-07-16 01:31:50.353366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:96 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.619 [2024-07-16 01:31:50.353375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.619 [2024-07-16 01:31:50.362358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.619 [2024-07-16 01:31:50.362378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:17078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.619 [2024-07-16 01:31:50.362386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.619 [2024-07-16 01:31:50.371354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.619 [2024-07-16 01:31:50.371375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.619 [2024-07-16 01:31:50.371383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.619 [2024-07-16 01:31:50.380332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.619 [2024-07-16 01:31:50.380357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:21317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.619 [2024-07-16 01:31:50.380365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.619 [2024-07-16 01:31:50.389518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.619 [2024-07-16 01:31:50.389538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.619 [2024-07-16 01:31:50.389546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.619 [2024-07-16 01:31:50.398483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.619 [2024-07-16 01:31:50.398503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.619 [2024-07-16 01:31:50.398513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.619 [2024-07-16 01:31:50.407585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.619 [2024-07-16 01:31:50.407604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.619 [2024-07-16 01:31:50.407612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.619 [2024-07-16 01:31:50.416659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.619 [2024-07-16 01:31:50.416678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.619 [2024-07-16 01:31:50.416686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.620 [2024-07-16 01:31:50.425642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.620 [2024-07-16 01:31:50.425663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:24.620 [2024-07-16 01:31:50.425671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:24.620 [2024-07-16 01:31:50.436520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:24.620 [2024-07-16 01:31:50.436540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1
lba:19460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.620 [2024-07-16 01:31:50.436548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.620 [2024-07-16 01:31:50.448730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:24.620 [2024-07-16 01:31:50.448749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.620 [2024-07-16 01:31:50.448757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.620 [2024-07-16 01:31:50.461052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:24.620 [2024-07-16 01:31:50.461072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.620 [2024-07-16 01:31:50.461080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.620 [2024-07-16 01:31:50.469323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:24.620 [2024-07-16 01:31:50.469348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.620 [2024-07-16 01:31:50.469356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.620 [2024-07-16 01:31:50.481422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:24.620 [2024-07-16 01:31:50.481442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.620 [2024-07-16 01:31:50.481450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.620 [2024-07-16 01:31:50.493192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:24.620 [2024-07-16 01:31:50.493212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:11619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.620 [2024-07-16 01:31:50.493220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.620 [2024-07-16 01:31:50.505091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:24.620 [2024-07-16 01:31:50.505110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.620 [2024-07-16 01:31:50.505118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.620 [2024-07-16 01:31:50.517593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:24.620 [2024-07-16 01:31:50.517613] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.620 [2024-07-16 01:31:50.517622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.620 [2024-07-16 01:31:50.529543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:24.620 [2024-07-16 01:31:50.529564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:5465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.620 [2024-07-16 01:31:50.529572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.620 [2024-07-16 01:31:50.539351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:24.620 [2024-07-16 01:31:50.539371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.620 [2024-07-16 01:31:50.539379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.620 [2024-07-16 01:31:50.550480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:24.620 [2024-07-16 01:31:50.550499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:14645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.620 [2024-07-16 01:31:50.550506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.620 [2024-07-16 01:31:50.559027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:24.620 [2024-07-16 01:31:50.559047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:8239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.620 [2024-07-16 01:31:50.559054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.620 [2024-07-16 01:31:50.569531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:24.620 [2024-07-16 01:31:50.569550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:22592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.620 [2024-07-16 01:31:50.569558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.620 [2024-07-16 01:31:50.577656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:24.620 [2024-07-16 01:31:50.577676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:10764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.620 [2024-07-16 01:31:50.577688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.620 [2024-07-16 01:31:50.588110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 
00:26:24.620 [2024-07-16 01:31:50.588130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:7603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.620 [2024-07-16 01:31:50.588137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.620 [2024-07-16 01:31:50.600042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:24.620 [2024-07-16 01:31:50.600063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:6427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.620 [2024-07-16 01:31:50.600071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.876 [2024-07-16 01:31:50.608550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:24.876 [2024-07-16 01:31:50.608573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.876 [2024-07-16 01:31:50.608582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.876 [2024-07-16 01:31:50.619997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:24.876 [2024-07-16 01:31:50.620019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.876 [2024-07-16 01:31:50.620028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.876 [2024-07-16 01:31:50.631946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:24.876 [2024-07-16 01:31:50.631967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.876 [2024-07-16 01:31:50.631975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.876 [2024-07-16 01:31:50.642614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:24.876 [2024-07-16 01:31:50.642635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:19413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.876 [2024-07-16 01:31:50.642643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.876 [2024-07-16 01:31:50.650647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:24.876 [2024-07-16 01:31:50.650667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.876 [2024-07-16 01:31:50.650675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.876 [2024-07-16 01:31:50.662562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x61e130) 00:26:24.877 [2024-07-16 01:31:50.662583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.877 [2024-07-16 01:31:50.662591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.877 [2024-07-16 01:31:50.670658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:24.877 [2024-07-16 01:31:50.670682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.877 [2024-07-16 01:31:50.670690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.877 [2024-07-16 01:31:50.681818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:24.877 [2024-07-16 01:31:50.681838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.877 [2024-07-16 01:31:50.681847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.877 [2024-07-16 01:31:50.693274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:24.877 [2024-07-16 01:31:50.693294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:7753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.877 [2024-07-16 01:31:50.693302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.877 [2024-07-16 01:31:50.703750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:24.877 [2024-07-16 01:31:50.703769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.877 [2024-07-16 01:31:50.703777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.877 [2024-07-16 01:31:50.712049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:24.877 [2024-07-16 01:31:50.712069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.877 [2024-07-16 01:31:50.712076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.877 [2024-07-16 01:31:50.724203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:24.877 [2024-07-16 01:31:50.724224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.877 [2024-07-16 01:31:50.724232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.877 [2024-07-16 01:31:50.737095] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:24.877 [2024-07-16 01:31:50.737116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.877 [2024-07-16 01:31:50.737124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.877 [2024-07-16 01:31:50.747006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:24.877 [2024-07-16 01:31:50.747025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.877 [2024-07-16 01:31:50.747033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.877 [2024-07-16 01:31:50.757448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:24.877 [2024-07-16 01:31:50.757484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.877 [2024-07-16 01:31:50.757492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.877 [2024-07-16 01:31:50.766434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:24.877 [2024-07-16 01:31:50.766454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.877 [2024-07-16 01:31:50.766462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.877 [2024-07-16 01:31:50.777107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:24.877 [2024-07-16 01:31:50.777130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.877 [2024-07-16 01:31:50.777137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.877 [2024-07-16 01:31:50.788149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:24.877 [2024-07-16 01:31:50.788170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.877 [2024-07-16 01:31:50.788177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.877 [2024-07-16 01:31:50.796834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:24.877 [2024-07-16 01:31:50.796855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.877 [2024-07-16 01:31:50.796863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:26:24.877 [2024-07-16 01:31:50.808417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:24.877 [2024-07-16 01:31:50.808438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.877 [2024-07-16 01:31:50.808446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.877 [2024-07-16 01:31:50.816207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:24.877 [2024-07-16 01:31:50.816228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.877 [2024-07-16 01:31:50.816236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.877 [2024-07-16 01:31:50.826585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:24.877 [2024-07-16 01:31:50.826607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.877 [2024-07-16 01:31:50.826615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.877 [2024-07-16 01:31:50.835061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:24.877 [2024-07-16 01:31:50.835081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:10249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.877 [2024-07-16 01:31:50.835089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.877 [2024-07-16 01:31:50.845941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:24.877 [2024-07-16 01:31:50.845960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.877 [2024-07-16 01:31:50.845972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.877 [2024-07-16 01:31:50.854994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:24.877 [2024-07-16 01:31:50.855014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.877 [2024-07-16 01:31:50.855021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.134 [2024-07-16 01:31:50.866982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.134 [2024-07-16 01:31:50.867007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.134 [2024-07-16 01:31:50.867016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.134 [2024-07-16 01:31:50.879246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.134 [2024-07-16 01:31:50.879270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.134 [2024-07-16 01:31:50.879279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.134 [2024-07-16 01:31:50.890041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.134 [2024-07-16 01:31:50.890063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.134 [2024-07-16 01:31:50.890071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.134 [2024-07-16 01:31:50.898233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.134 [2024-07-16 01:31:50.898254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.134 [2024-07-16 01:31:50.898262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.134 [2024-07-16 01:31:50.909799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.134 [2024-07-16 01:31:50.909818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.134 [2024-07-16 01:31:50.909826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.134 [2024-07-16 01:31:50.921502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.134 [2024-07-16 01:31:50.921524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.134 [2024-07-16 01:31:50.921531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.134 [2024-07-16 01:31:50.932294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.134 [2024-07-16 01:31:50.932315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.134 [2024-07-16 01:31:50.932322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.134 [2024-07-16 01:31:50.940902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.134 [2024-07-16 01:31:50.940927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.134 [2024-07-16 01:31:50.940935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.134 [2024-07-16 01:31:50.951393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.134 [2024-07-16 01:31:50.951413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:14207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.134 [2024-07-16 01:31:50.951421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.134 [2024-07-16 01:31:50.958735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.134 [2024-07-16 01:31:50.958755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:12082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.134 [2024-07-16 01:31:50.958765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.134 [2024-07-16 01:31:50.970359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.134 [2024-07-16 01:31:50.970379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.134 [2024-07-16 01:31:50.970387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.134 [2024-07-16 01:31:50.981495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.134 [2024-07-16 01:31:50.981516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.134 [2024-07-16 01:31:50.981524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.134 [2024-07-16 01:31:50.993517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.134 [2024-07-16 01:31:50.993537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.134 [2024-07-16 01:31:50.993545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.134 [2024-07-16 01:31:51.005561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.134 [2024-07-16 01:31:51.005581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.134 [2024-07-16 01:31:51.005589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.134 [2024-07-16 01:31:51.016620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.134 [2024-07-16 01:31:51.016641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.134 [2024-07-16 01:31:51.016649] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.134 [2024-07-16 01:31:51.024731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.134 [2024-07-16 01:31:51.024751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.134 [2024-07-16 01:31:51.024759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.134 [2024-07-16 01:31:51.036562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.134 [2024-07-16 01:31:51.036583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.134 [2024-07-16 01:31:51.036590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.134 [2024-07-16 01:31:51.047084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.134 [2024-07-16 01:31:51.047104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.134 [2024-07-16 01:31:51.047111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.134 [2024-07-16 01:31:51.055349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.134 [2024-07-16 01:31:51.055369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.134 [2024-07-16 01:31:51.055377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.134 [2024-07-16 01:31:51.067227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.134 [2024-07-16 01:31:51.067248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.134 [2024-07-16 01:31:51.067256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.134 [2024-07-16 01:31:51.077261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.134 [2024-07-16 01:31:51.077282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.134 [2024-07-16 01:31:51.077290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.134 [2024-07-16 01:31:51.085406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.134 [2024-07-16 01:31:51.085427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:24396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:25.134 [2024-07-16 01:31:51.085434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.134 [2024-07-16 01:31:51.094916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.134 [2024-07-16 01:31:51.094937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.134 [2024-07-16 01:31:51.094945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.134 [2024-07-16 01:31:51.103531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.134 [2024-07-16 01:31:51.103551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:7279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.134 [2024-07-16 01:31:51.103558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.134 [2024-07-16 01:31:51.114208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.134 [2024-07-16 01:31:51.114229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.134 [2024-07-16 01:31:51.114240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.392 [2024-07-16 01:31:51.125135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.392 [2024-07-16 01:31:51.125158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.392 [2024-07-16 01:31:51.125168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.392 [2024-07-16 01:31:51.133667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.392 [2024-07-16 01:31:51.133689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.392 [2024-07-16 01:31:51.133697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.392 [2024-07-16 01:31:51.144294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.392 [2024-07-16 01:31:51.144315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:17408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.392 [2024-07-16 01:31:51.144323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.392 [2024-07-16 01:31:51.153403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.392 [2024-07-16 01:31:51.153424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15585 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.392 [2024-07-16 01:31:51.153432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.392 [2024-07-16 01:31:51.162899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.392 [2024-07-16 01:31:51.162920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.392 [2024-07-16 01:31:51.162929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.392 [2024-07-16 01:31:51.171785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.392 [2024-07-16 01:31:51.171806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.392 [2024-07-16 01:31:51.171815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.392 [2024-07-16 01:31:51.180932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.392 [2024-07-16 01:31:51.180952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.392 [2024-07-16 01:31:51.180961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.392 [2024-07-16 01:31:51.191382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.392 [2024-07-16 01:31:51.191402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.392 [2024-07-16 01:31:51.191410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.392 [2024-07-16 01:31:51.199932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.393 [2024-07-16 01:31:51.199952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-07-16 01:31:51.199960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.393 [2024-07-16 01:31:51.209323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.393 [2024-07-16 01:31:51.209349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-07-16 01:31:51.209357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.393 [2024-07-16 01:31:51.218755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.393 [2024-07-16 01:31:51.218774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:52 nsid:1 lba:4721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-07-16 01:31:51.218782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.393 [2024-07-16 01:31:51.228143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.393 [2024-07-16 01:31:51.228163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-07-16 01:31:51.228171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.393 [2024-07-16 01:31:51.238097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.393 [2024-07-16 01:31:51.238118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:3000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-07-16 01:31:51.238125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.393 [2024-07-16 01:31:51.246516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.393 [2024-07-16 01:31:51.246536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-07-16 01:31:51.246544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.393 [2024-07-16 01:31:51.255586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.393 [2024-07-16 01:31:51.255606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-07-16 01:31:51.255614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.393 [2024-07-16 01:31:51.265872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.393 [2024-07-16 01:31:51.265892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-07-16 01:31:51.265899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.393 [2024-07-16 01:31:51.277313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.393 [2024-07-16 01:31:51.277333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-07-16 01:31:51.277351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.393 [2024-07-16 01:31:51.285279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.393 [2024-07-16 01:31:51.285300] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-07-16 01:31:51.285307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.393 [2024-07-16 01:31:51.294733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.393 [2024-07-16 01:31:51.294754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-07-16 01:31:51.294762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.393 [2024-07-16 01:31:51.303604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.393 [2024-07-16 01:31:51.303623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-07-16 01:31:51.303631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.393 [2024-07-16 01:31:51.312227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.393 [2024-07-16 01:31:51.312245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-07-16 01:31:51.312253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.393 [2024-07-16 01:31:51.321971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.393 [2024-07-16 01:31:51.321991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-07-16 01:31:51.321998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.393 [2024-07-16 01:31:51.331586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.393 [2024-07-16 01:31:51.331606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:14487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-07-16 01:31:51.331614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.393 [2024-07-16 01:31:51.339615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.393 [2024-07-16 01:31:51.339635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:16233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-07-16 01:31:51.339643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.393 [2024-07-16 01:31:51.349285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.393 
[2024-07-16 01:31:51.349304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-07-16 01:31:51.349312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.393 [2024-07-16 01:31:51.358861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.393 [2024-07-16 01:31:51.358884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-07-16 01:31:51.358892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.393 [2024-07-16 01:31:51.367146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.393 [2024-07-16 01:31:51.367165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-07-16 01:31:51.367173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.393 [2024-07-16 01:31:51.378176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.393 [2024-07-16 01:31:51.378199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-07-16 01:31:51.378207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.651 [2024-07-16 01:31:51.387205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.651 [2024-07-16 01:31:51.387229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.651 [2024-07-16 01:31:51.387238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.651 [2024-07-16 01:31:51.398706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.651 [2024-07-16 01:31:51.398728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.651 [2024-07-16 01:31:51.398736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.651 [2024-07-16 01:31:51.406769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130) 00:26:25.651 [2024-07-16 01:31:51.406790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:16599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.651 [2024-07-16 01:31:51.406798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.651 [2024-07-16 01:31:51.418412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x61e130)
00:26:25.651 [2024-07-16 01:31:51.418432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.651 [2024-07-16 01:31:51.418440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... 19 similar records elided (01:31:51.428 through 01:31:51.612): each repeats the same three-line pattern on tqpair 0x61e130 - a *ERROR* data digest error from nvme_tcp_accel_seq_recv_compute_crc32_done, the failed READ (qid:1, len:1) printed by nvme_io_qpair_print_command, and a completion of COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0; only cid and lba vary ...]
00:26:25.651 [2024-07-16 01:31:51.619286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61e130)
00:26:25.652 [2024-07-16 01:31:51.619309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:25.652 [2024-07-16 01:31:51.619317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
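Every failed I/O in the run above emits the same three-record pattern, so the raw error volume can also be recovered from a captured copy of this log with standard tools. A minimal sketch in shell, assuming the output has been saved to a hypothetical bperf.log:

  # Count completions that failed as transient transport errors.
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf.log

  # Confirm all failures came from the same TCP queue pair
  # (tqpair 0x61e130 in this run).
  grep -o 'tqpair=([^)]*)' bperf.log | sort | uniq -c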
00:26:25.652
00:26:25.652 Latency(us)
00:26:25.652 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:25.652 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:26:25.652 nvme0n1 : 2.00 25466.31 99.48 0.00 0.00 5021.15 2262.55 17351.44
00:26:25.652 ===================================================================================================================
00:26:25.652 Total : 25466.31 99.48 0.00 0.00 5021.15 2262.55 17351.44
00:26:25.652 0
00:26:25.910 01:31:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:25.910 01:31:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:25.910 01:31:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:25.910 | .driver_specific
00:26:25.910 | .nvme_error
00:26:25.910 | .status_code
00:26:25.910 | .command_transient_transport_error'
00:26:25.910 01:31:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:25.910 01:31:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 200 > 0 ))
00:26:25.910 01:31:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3531347
00:26:25.910 01:31:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3531347 ']'
00:26:25.910 01:31:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3531347
00:26:25.910 01:31:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:26:25.910 01:31:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:25.910 01:31:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3531347
00:26:25.910 01:31:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:26:25.910 01:31:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:26:25.910 01:31:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3531347'
killing process with pid 3531347
00:26:25.910 01:31:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3531347
Received shutdown signal, test time was about 2.000000 seconds
00:26:25.910
00:26:25.910 Latency(us)
00:26:25.910 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:25.910 ===================================================================================================================
00:26:25.910 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:25.910 01:31:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3531347
00:26:26.167 01:31:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:26:26.167 01:31:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:26.167 01:31:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:26:26.167 01:31:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
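The (( 200 > 0 )) check above is not parsed out of this log text; get_transient_errcount reads the per-status-code counters that the bdev/nvme driver keeps once --nvme-error-stat is enabled, over the bdevperf RPC socket. A minimal standalone sketch of the same query, reusing the rpc.py path and jq filter shown in the trace:

  #!/usr/bin/env bash
  # Read the transient-transport-error counter for nvme0n1 from a
  # running bdevperf instance (socket path as used by this test).
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SOCK=/var/tmp/bperf.sock

  errcount=$("$RPC" -s "$SOCK" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

  # The test passes when at least one injected digest error was
  # surfaced to the host as a transient transport error (here: 200).
  (( errcount > 0 )) && echo "OK: $errcount transient transport errors"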
00:26:26.167 01:31:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:26:26.167 01:31:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3531941
00:26:26.167 01:31:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3531941 /var/tmp/bperf.sock
00:26:26.167 01:31:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3531941 ']'
00:26:26.167 01:31:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:26.167 01:31:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:26:26.167 01:31:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:26.167 01:31:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:26:26.167 01:31:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:26.168 01:31:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:26:26.168 [2024-07-16 01:31:52.080595] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization...
00:26:26.168 [2024-07-16 01:31:52.080641] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3531941 ]
00:26:26.168 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:26.168 Zero copy mechanism will not be used.
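The second pass repeats the experiment with 128 KiB random reads at queue depth 16. bdevperf is started with -z (start idle, wait for configuration over RPC) on core 1 only (-m 2, matching the Core Mask 0x2 in the earlier summary), and the harness blocks until the RPC socket answers. A minimal sketch of that launch-and-wait step, with the harness's waitforlisten approximated by a poll loop:

  #!/usr/bin/env bash
  BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
  SOCK=/var/tmp/bperf.sock

  # -z: start without bdevs and wait for RPC; -m 2: core mask, core 1 only.
  "$BDEVPERF" -m 2 -r "$SOCK" -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!

  # Stand-in for waitforlisten: poll until the RPC socket appears.
  for _ in $(seq 1 100); do
    [[ -S "$SOCK" ]] && break
    sleep 0.1
  done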
00:26:26.168 EAL: No free 2048 kB hugepages reported on node 1
00:26:26.168 [2024-07-16 01:31:52.136285] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:26.168 [2024-07-16 01:31:52.205527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:26.990 01:31:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:26.990 01:31:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:26:26.990 01:31:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:26.990 01:31:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:27.248 01:31:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:27.248 01:31:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:27.248 01:31:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:27.248 01:31:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:27.248 01:31:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:27.248 01:31:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:27.506 nvme0n1
00:26:27.506 01:31:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:26:27.506 01:31:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:27.506 01:31:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:27.506 01:31:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:27.506 01:31:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:27.506 01:31:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:27.766 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:27.766 Zero copy mechanism will not be used.
00:26:27.766 Running I/O for 2 seconds...
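Before the two-second run starts, the trace above arms the failure path over RPC: NVMe error statistics and unlimited retries are switched on, any stale CRC32C injection is cleared, the controller is attached with TCP data digest enabled (--ddgst), and only then is the accel CRC32C operation told to corrupt results (-t corrupt -i 32; the -i argument is taken verbatim from the trace). A condensed sketch of that sequence as a standalone script:

  #!/usr/bin/env bash
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SOCK=/var/tmp/bperf.sock

  # Keep per-status-code NVMe error counters; retry transient failures forever.
  "$RPC" -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Clear any leftover CRC32C error injection, then attach the target with
  # data digest enabled so every received payload is checksummed.
  "$RPC" -s "$SOCK" accel_error_inject_error -o crc32c -t disable
  "$RPC" -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Arm CRC32C corruption, then kick off the queued bdevperf job.
  "$RPC" -s "$SOCK" accel_error_inject_error -o crc32c -t corrupt -i 32
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s "$SOCK" perform_tests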
00:26:27.766 [2024-07-16 01:31:53.521034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600)
00:26:27.766 [2024-07-16 01:31:53.521069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:27.766 [2024-07-16 01:31:53.521080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... roughly 100 similar records elided (01:31:53.526 through 01:31:54.116): every 128 KiB READ on qid:1 (len:32) against tqpair 0x1e04600 hits the injected CRC32C corruption, is logged as a data digest error by nvme_tcp_accel_seq_recv_compute_crc32_done, and completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0; only cid, lba, and sqhd vary ...]
00:26:28.303 [2024-07-16 01:31:54.121603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600)
00:26:28.303 [2024-07-16 01:31:54.121624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1
lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.303 [2024-07-16 01:31:54.121635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.303 [2024-07-16 01:31:54.126985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.303 [2024-07-16 01:31:54.127008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.303 [2024-07-16 01:31:54.127016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.303 [2024-07-16 01:31:54.132133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.303 [2024-07-16 01:31:54.132154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.303 [2024-07-16 01:31:54.132161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.303 [2024-07-16 01:31:54.137372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.303 [2024-07-16 01:31:54.137393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.303 [2024-07-16 01:31:54.137401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.303 [2024-07-16 01:31:54.142599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.303 [2024-07-16 01:31:54.142620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.303 [2024-07-16 01:31:54.142627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.303 [2024-07-16 01:31:54.147985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.303 [2024-07-16 01:31:54.148006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.303 [2024-07-16 01:31:54.148014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.303 [2024-07-16 01:31:54.153963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.303 [2024-07-16 01:31:54.153984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.303 [2024-07-16 01:31:54.153992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.303 [2024-07-16 01:31:54.159860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.303 [2024-07-16 01:31:54.159881] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.303 [2024-07-16 01:31:54.159889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.303 [2024-07-16 01:31:54.166061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.303 [2024-07-16 01:31:54.166082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.303 [2024-07-16 01:31:54.166090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.303 [2024-07-16 01:31:54.172185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.303 [2024-07-16 01:31:54.172206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.303 [2024-07-16 01:31:54.172213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.303 [2024-07-16 01:31:54.178121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.303 [2024-07-16 01:31:54.178142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.303 [2024-07-16 01:31:54.178150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.303 [2024-07-16 01:31:54.183830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.303 [2024-07-16 01:31:54.183851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.303 [2024-07-16 01:31:54.183859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.303 [2024-07-16 01:31:54.189540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.303 [2024-07-16 01:31:54.189562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.303 [2024-07-16 01:31:54.189569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.303 [2024-07-16 01:31:54.195238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.303 [2024-07-16 01:31:54.195260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.303 [2024-07-16 01:31:54.195268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.303 [2024-07-16 01:31:54.200776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 
00:26:28.303 [2024-07-16 01:31:54.200797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.303 [2024-07-16 01:31:54.200804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.303 [2024-07-16 01:31:54.205965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.303 [2024-07-16 01:31:54.205987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.303 [2024-07-16 01:31:54.205994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.303 [2024-07-16 01:31:54.211291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.303 [2024-07-16 01:31:54.211312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.303 [2024-07-16 01:31:54.211319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.303 [2024-07-16 01:31:54.216652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.303 [2024-07-16 01:31:54.216673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.303 [2024-07-16 01:31:54.216685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.303 [2024-07-16 01:31:54.222312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.303 [2024-07-16 01:31:54.222333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.303 [2024-07-16 01:31:54.222346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.303 [2024-07-16 01:31:54.227814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.303 [2024-07-16 01:31:54.227835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.303 [2024-07-16 01:31:54.227842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.303 [2024-07-16 01:31:54.233170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.303 [2024-07-16 01:31:54.233190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.303 [2024-07-16 01:31:54.233198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.303 [2024-07-16 01:31:54.238475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.303 [2024-07-16 01:31:54.238496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.303 [2024-07-16 01:31:54.238504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.303 [2024-07-16 01:31:54.243801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.303 [2024-07-16 01:31:54.243822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.303 [2024-07-16 01:31:54.243829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.303 [2024-07-16 01:31:54.249456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.303 [2024-07-16 01:31:54.249476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.303 [2024-07-16 01:31:54.249484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.303 [2024-07-16 01:31:54.255545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.303 [2024-07-16 01:31:54.255567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.303 [2024-07-16 01:31:54.255574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.303 [2024-07-16 01:31:54.261503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.303 [2024-07-16 01:31:54.261522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.303 [2024-07-16 01:31:54.261529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.303 [2024-07-16 01:31:54.267218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.303 [2024-07-16 01:31:54.267243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.303 [2024-07-16 01:31:54.267251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.303 [2024-07-16 01:31:54.272262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.303 [2024-07-16 01:31:54.272282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.303 [2024-07-16 01:31:54.272291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.303 [2024-07-16 01:31:54.275606] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.303 [2024-07-16 01:31:54.275627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.303 [2024-07-16 01:31:54.275635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.303 [2024-07-16 01:31:54.281354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.303 [2024-07-16 01:31:54.281374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.303 [2024-07-16 01:31:54.281382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.303 [2024-07-16 01:31:54.287267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.303 [2024-07-16 01:31:54.287290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.303 [2024-07-16 01:31:54.287298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.571 [2024-07-16 01:31:54.293025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.571 [2024-07-16 01:31:54.293049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.571 [2024-07-16 01:31:54.293060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.571 [2024-07-16 01:31:54.298678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.571 [2024-07-16 01:31:54.298701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.571 [2024-07-16 01:31:54.298709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.571 [2024-07-16 01:31:54.304204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.571 [2024-07-16 01:31:54.304225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.571 [2024-07-16 01:31:54.304233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.571 [2024-07-16 01:31:54.309662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.571 [2024-07-16 01:31:54.309683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.571 [2024-07-16 01:31:54.309692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:26:28.571 [2024-07-16 01:31:54.315041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.571 [2024-07-16 01:31:54.315060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.571 [2024-07-16 01:31:54.315068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.571 [2024-07-16 01:31:54.320480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.571 [2024-07-16 01:31:54.320501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.571 [2024-07-16 01:31:54.320509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.571 [2024-07-16 01:31:54.325946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.571 [2024-07-16 01:31:54.325966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.571 [2024-07-16 01:31:54.325974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.571 [2024-07-16 01:31:54.331234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.571 [2024-07-16 01:31:54.331255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.571 [2024-07-16 01:31:54.331263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.571 [2024-07-16 01:31:54.336160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.571 [2024-07-16 01:31:54.336181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.571 [2024-07-16 01:31:54.336189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.571 [2024-07-16 01:31:54.341402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.571 [2024-07-16 01:31:54.341423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.571 [2024-07-16 01:31:54.341430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.571 [2024-07-16 01:31:54.346526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.571 [2024-07-16 01:31:54.346547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.571 [2024-07-16 01:31:54.346555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.571 [2024-07-16 01:31:54.351695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.571 [2024-07-16 01:31:54.351716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.571 [2024-07-16 01:31:54.351724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.571 [2024-07-16 01:31:54.356840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.571 [2024-07-16 01:31:54.356861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.571 [2024-07-16 01:31:54.356873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.571 [2024-07-16 01:31:54.362012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.571 [2024-07-16 01:31:54.362033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.571 [2024-07-16 01:31:54.362041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.571 [2024-07-16 01:31:54.367097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.571 [2024-07-16 01:31:54.367118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.571 [2024-07-16 01:31:54.367126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.571 [2024-07-16 01:31:54.372308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.571 [2024-07-16 01:31:54.372329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.571 [2024-07-16 01:31:54.372343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.571 [2024-07-16 01:31:54.377464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.571 [2024-07-16 01:31:54.377489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.571 [2024-07-16 01:31:54.377497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.571 [2024-07-16 01:31:54.382539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.571 [2024-07-16 01:31:54.382560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.571 [2024-07-16 01:31:54.382568] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.571 [2024-07-16 01:31:54.387607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.571 [2024-07-16 01:31:54.387630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.572 [2024-07-16 01:31:54.387637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.572 [2024-07-16 01:31:54.392812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.572 [2024-07-16 01:31:54.392834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.572 [2024-07-16 01:31:54.392841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.572 [2024-07-16 01:31:54.398087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.572 [2024-07-16 01:31:54.398109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.572 [2024-07-16 01:31:54.398117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.572 [2024-07-16 01:31:54.403390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.572 [2024-07-16 01:31:54.403414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.572 [2024-07-16 01:31:54.403423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.572 [2024-07-16 01:31:54.408753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.572 [2024-07-16 01:31:54.408775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.572 [2024-07-16 01:31:54.408783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.572 [2024-07-16 01:31:54.413726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.572 [2024-07-16 01:31:54.413749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.572 [2024-07-16 01:31:54.413757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.572 [2024-07-16 01:31:54.419017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.572 [2024-07-16 01:31:54.419038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:28.572 [2024-07-16 01:31:54.419046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.572 [2024-07-16 01:31:54.424385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.572 [2024-07-16 01:31:54.424406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.572 [2024-07-16 01:31:54.424414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.572 [2024-07-16 01:31:54.429753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.572 [2024-07-16 01:31:54.429775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.572 [2024-07-16 01:31:54.429783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.572 [2024-07-16 01:31:54.435099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.572 [2024-07-16 01:31:54.435120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.572 [2024-07-16 01:31:54.435129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.572 [2024-07-16 01:31:54.440455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.572 [2024-07-16 01:31:54.440476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.572 [2024-07-16 01:31:54.440484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.572 [2024-07-16 01:31:54.445728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.572 [2024-07-16 01:31:54.445749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.572 [2024-07-16 01:31:54.445763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.572 [2024-07-16 01:31:54.451056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.572 [2024-07-16 01:31:54.451077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.572 [2024-07-16 01:31:54.451085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.572 [2024-07-16 01:31:54.456367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.572 [2024-07-16 01:31:54.456388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.572 [2024-07-16 01:31:54.456396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.572 [2024-07-16 01:31:54.461755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.572 [2024-07-16 01:31:54.461776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.572 [2024-07-16 01:31:54.461784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.572 [2024-07-16 01:31:54.467359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.572 [2024-07-16 01:31:54.467379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.572 [2024-07-16 01:31:54.467387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.572 [2024-07-16 01:31:54.472349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.572 [2024-07-16 01:31:54.472370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.572 [2024-07-16 01:31:54.472378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.572 [2024-07-16 01:31:54.477691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.572 [2024-07-16 01:31:54.477713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.572 [2024-07-16 01:31:54.477720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.572 [2024-07-16 01:31:54.483011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.572 [2024-07-16 01:31:54.483032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.572 [2024-07-16 01:31:54.483040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.572 [2024-07-16 01:31:54.489130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.572 [2024-07-16 01:31:54.489151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.572 [2024-07-16 01:31:54.489159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.572 [2024-07-16 01:31:54.496697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.572 [2024-07-16 01:31:54.496723] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.572 [2024-07-16 01:31:54.496731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.572 [2024-07-16 01:31:54.501287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.572 [2024-07-16 01:31:54.501308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.572 [2024-07-16 01:31:54.501317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.572 [2024-07-16 01:31:54.507234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.572 [2024-07-16 01:31:54.507255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.572 [2024-07-16 01:31:54.507263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.572 [2024-07-16 01:31:54.513728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.572 [2024-07-16 01:31:54.513749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.572 [2024-07-16 01:31:54.513757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.572 [2024-07-16 01:31:54.519811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.572 [2024-07-16 01:31:54.519833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.572 [2024-07-16 01:31:54.519840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.572 [2024-07-16 01:31:54.526431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.572 [2024-07-16 01:31:54.526452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.572 [2024-07-16 01:31:54.526460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.572 [2024-07-16 01:31:54.533661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.572 [2024-07-16 01:31:54.533683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.572 [2024-07-16 01:31:54.533691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.572 [2024-07-16 01:31:54.541039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.572 
[2024-07-16 01:31:54.541060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.572 [2024-07-16 01:31:54.541069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.572 [2024-07-16 01:31:54.548513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.572 [2024-07-16 01:31:54.548535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.572 [2024-07-16 01:31:54.548543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.572 [2024-07-16 01:31:54.556684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.572 [2024-07-16 01:31:54.556709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.573 [2024-07-16 01:31:54.556717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.831 [2024-07-16 01:31:54.563429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.831 [2024-07-16 01:31:54.563453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.831 [2024-07-16 01:31:54.563462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.831 [2024-07-16 01:31:54.569483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.831 [2024-07-16 01:31:54.569506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.831 [2024-07-16 01:31:54.569514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.831 [2024-07-16 01:31:54.575245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.831 [2024-07-16 01:31:54.575267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.831 [2024-07-16 01:31:54.575276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.831 [2024-07-16 01:31:54.580871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.831 [2024-07-16 01:31:54.580892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.831 [2024-07-16 01:31:54.580900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.831 [2024-07-16 01:31:54.586637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1e04600) 00:26:28.831 [2024-07-16 01:31:54.586657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.832 [2024-07-16 01:31:54.586665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.832 [2024-07-16 01:31:54.592250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.832 [2024-07-16 01:31:54.592270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.832 [2024-07-16 01:31:54.592278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.832 [2024-07-16 01:31:54.597753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.832 [2024-07-16 01:31:54.597773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.832 [2024-07-16 01:31:54.597781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.832 [2024-07-16 01:31:54.603163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.832 [2024-07-16 01:31:54.603183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.832 [2024-07-16 01:31:54.603195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.832 [2024-07-16 01:31:54.608584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.832 [2024-07-16 01:31:54.608605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.832 [2024-07-16 01:31:54.608613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.832 [2024-07-16 01:31:54.613954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.832 [2024-07-16 01:31:54.613975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.832 [2024-07-16 01:31:54.613983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.832 [2024-07-16 01:31:54.619401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:28.832 [2024-07-16 01:31:54.619421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.832 [2024-07-16 01:31:54.619429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.832 [2024-07-16 01:31:54.624767] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600)
00:26:28.832 [2024-07-16 01:31:54.624788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:28.832 [2024-07-16 01:31:54.624795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... further record triplets in the same pattern omitted (timestamps 01:31:54.630 through 01:31:55.449): each reports a data digest error on tqpair=(0x1e04600), the failing READ command (sqid:1; cid 1, 2, 9 or 10; varying LBAs; len:32), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion ...]
00:26:29.614 [2024-07-16 01:31:55.456124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600)
00:26:29.614 [2024-07-16 01:31:55.456145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:29.614 [2024-07-16 01:31:55.456153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR
(00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:29.614 [2024-07-16 01:31:55.461655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:29.614 [2024-07-16 01:31:55.461677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.614 [2024-07-16 01:31:55.461685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.614 [2024-07-16 01:31:55.467443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:29.614 [2024-07-16 01:31:55.467464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.614 [2024-07-16 01:31:55.467472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:29.614 [2024-07-16 01:31:55.473455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:29.614 [2024-07-16 01:31:55.473475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.614 [2024-07-16 01:31:55.473484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:29.614 [2024-07-16 01:31:55.479490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:29.614 [2024-07-16 01:31:55.479512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.614 [2024-07-16 01:31:55.479520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:29.614 [2024-07-16 01:31:55.486856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:29.614 [2024-07-16 01:31:55.486878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.614 [2024-07-16 01:31:55.486887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.614 [2024-07-16 01:31:55.494139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:29.614 [2024-07-16 01:31:55.494161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.614 [2024-07-16 01:31:55.494169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:29.614 [2024-07-16 01:31:55.501421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600) 00:26:29.614 [2024-07-16 01:31:55.501442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.614 [2024-07-16 01:31:55.501451] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:29.614 [2024-07-16 01:31:55.509400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600)
00:26:29.614 [2024-07-16 01:31:55.509421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:29.614 [2024-07-16 01:31:55.509430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:29.614 [2024-07-16 01:31:55.517045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e04600)
00:26:29.614 [2024-07-16 01:31:55.517067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:29.614 [2024-07-16 01:31:55.517076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:29.614
00:26:29.614 Latency(us)
00:26:29.614 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:29.614 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:26:29.614 nvme0n1 : 2.00 5264.12 658.01 0.00 0.00 3036.06 674.86 9050.21
00:26:29.614 ===================================================================================================================
00:26:29.614 Total : 5264.12 658.01 0.00 0.00 3036.06 674.86 9050.21
00:26:29.614 0
00:26:29.614 01:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:29.614 01:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:29.614 01:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:29.614 | .driver_specific
00:26:29.614 | .nvme_error
00:26:29.614 | .status_code
00:26:29.614 | .command_transient_transport_error'
00:26:29.614 01:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:29.873 01:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 340 > 0 ))
00:26:29.873 01:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3531941
00:26:29.873 01:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3531941 ']'
00:26:29.873 01:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3531941
00:26:29.873 01:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:26:29.873 01:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:29.873 01:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3531941
00:26:29.873 01:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:26:29.873 01:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:26:29.873 01:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3531941'
killing process with pid 3531941
01:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3531941
00:26:29.873 Received shutdown signal, test time was about 2.000000 seconds
00:26:29.873
00:26:29.873 Latency(us)
00:26:29.873 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:29.873 ===================================================================================================================
00:26:29.873 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:29.873 01:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3531941
00:26:30.131 01:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:26:30.131 01:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:30.131 01:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:26:30.131 01:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:26:30.131 01:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:26:30.131 01:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3532640
00:26:30.131 01:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3532640 /var/tmp/bperf.sock
00:26:30.131 01:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:26:30.131 01:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3532640 ']'
00:26:30.131 01:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:30.131 01:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:26:30.131 01:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
01:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
01:31:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:30.131 [2024-07-16 01:31:55.986950] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization...
00:26:30.131 [2024-07-16 01:31:55.986993] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3532640 ]
00:26:30.131 EAL: No free 2048 kB hugepages reported on node 1
00:26:30.131 [2024-07-16 01:31:56.042146] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:30.131 [2024-07-16 01:31:56.108710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:31.065 01:31:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:31.065 01:31:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:26:31.065 01:31:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:31.065 01:31:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:31.065 01:31:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:31.065 01:31:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:31.065 01:31:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:31.065 01:31:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:31.065 01:31:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:31.065 01:31:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:31.631 nvme0n1
00:26:31.631 01:31:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
01:31:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
01:31:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
01:31:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
01:31:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
01:31:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
Running I/O for 2 seconds...
00:26:31.631 [2024-07-16 01:31:57.512937] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:31.631 [2024-07-16 01:31:57.513095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.631 [2024-07-16 01:31:57.513125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:31.631 [2024-07-16 01:31:57.522347] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:31.631 [2024-07-16 01:31:57.522495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.631 [2024-07-16 01:31:57.522518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:31.631 [2024-07-16 01:31:57.531726] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:31.631 [2024-07-16 01:31:57.531875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.631 [2024-07-16 01:31:57.531894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:31.631 [2024-07-16 01:31:57.541061] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:31.631 [2024-07-16 01:31:57.541205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.631 [2024-07-16 01:31:57.541223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:31.631 [2024-07-16 01:31:57.550405] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:31.631 [2024-07-16 01:31:57.550551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.631 [2024-07-16 01:31:57.550570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:31.631 [2024-07-16 01:31:57.559758] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:31.631 [2024-07-16 01:31:57.559900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.631 [2024-07-16 01:31:57.559918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:31.631 [2024-07-16 01:31:57.569117] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:31.631 [2024-07-16 01:31:57.569261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.631 [2024-07-16 01:31:57.569278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e 
p:0 m:0 dnr:0 00:26:31.631 [2024-07-16 01:31:57.578422] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:31.631 [2024-07-16 01:31:57.578565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.631 [2024-07-16 01:31:57.578583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:31.631 [2024-07-16 01:31:57.587780] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:31.631 [2024-07-16 01:31:57.587922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.631 [2024-07-16 01:31:57.587940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:31.631 [2024-07-16 01:31:57.597072] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:31.631 [2024-07-16 01:31:57.597213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.631 [2024-07-16 01:31:57.597230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:31.631 [2024-07-16 01:31:57.606375] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:31.631 [2024-07-16 01:31:57.606517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.631 [2024-07-16 01:31:57.606534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:31.631 [2024-07-16 01:31:57.615703] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:31.631 [2024-07-16 01:31:57.615849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.631 [2024-07-16 01:31:57.615870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:31.890 [2024-07-16 01:31:57.625306] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:31.890 [2024-07-16 01:31:57.625462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.890 [2024-07-16 01:31:57.625483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:31.890 [2024-07-16 01:31:57.634884] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:31.890 [2024-07-16 01:31:57.635028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.890 [2024-07-16 01:31:57.635047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 
cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:31.890 [2024-07-16 01:31:57.644432] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:31.890 [2024-07-16 01:31:57.644578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.890 [2024-07-16 01:31:57.644599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:31.890 [2024-07-16 01:31:57.653918] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:31.890 [2024-07-16 01:31:57.654062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.890 [2024-07-16 01:31:57.654079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:31.890 [2024-07-16 01:31:57.663394] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:31.890 [2024-07-16 01:31:57.663538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.890 [2024-07-16 01:31:57.663555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:31.890 [2024-07-16 01:31:57.672994] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:31.890 [2024-07-16 01:31:57.673139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.890 [2024-07-16 01:31:57.673157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:31.890 [2024-07-16 01:31:57.682466] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:31.890 [2024-07-16 01:31:57.682608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.890 [2024-07-16 01:31:57.682626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:31.890 [2024-07-16 01:31:57.691830] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:31.890 [2024-07-16 01:31:57.691972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.890 [2024-07-16 01:31:57.691989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:31.890 [2024-07-16 01:31:57.701378] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:31.890 [2024-07-16 01:31:57.701524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.890 [2024-07-16 01:31:57.701544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:31.890 [2024-07-16 01:31:57.710953] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:31.890 [2024-07-16 01:31:57.711100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.890 [2024-07-16 01:31:57.711118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:31.890 [2024-07-16 01:31:57.720495] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:31.890 [2024-07-16 01:31:57.720636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.890 [2024-07-16 01:31:57.720654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:31.890 [2024-07-16 01:31:57.729888] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:31.890 [2024-07-16 01:31:57.730034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.890 [2024-07-16 01:31:57.730051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:31.891 [2024-07-16 01:31:57.739212] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:31.891 [2024-07-16 01:31:57.739354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.891 [2024-07-16 01:31:57.739371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:31.891 [2024-07-16 01:31:57.748487] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:31.891 [2024-07-16 01:31:57.748628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.891 [2024-07-16 01:31:57.748647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:31.891 [2024-07-16 01:31:57.757811] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:31.891 [2024-07-16 01:31:57.757955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.891 [2024-07-16 01:31:57.757972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:31.891 [2024-07-16 01:31:57.767154] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:31.891 [2024-07-16 01:31:57.767303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.891 [2024-07-16 01:31:57.767320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:31.891 [2024-07-16 01:31:57.776602] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:31.891 [2024-07-16 01:31:57.776746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.891 [2024-07-16 01:31:57.776763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:31.891 [2024-07-16 01:31:57.785912] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:31.891 [2024-07-16 01:31:57.786054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.891 [2024-07-16 01:31:57.786072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:31.891 [2024-07-16 01:31:57.795212] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:31.891 [2024-07-16 01:31:57.795355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.891 [2024-07-16 01:31:57.795373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:31.891 [2024-07-16 01:31:57.804500] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:31.891 [2024-07-16 01:31:57.804641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.891 [2024-07-16 01:31:57.804658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:31.891 [2024-07-16 01:31:57.813813] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:31.891 [2024-07-16 01:31:57.813957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.891 [2024-07-16 01:31:57.813974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:31.891 [2024-07-16 01:31:57.823117] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:31.891 [2024-07-16 01:31:57.823261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.891 [2024-07-16 01:31:57.823279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:31.891 [2024-07-16 01:31:57.832455] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:31.891 [2024-07-16 01:31:57.832601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.891 [2024-07-16 01:31:57.832618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:31.891 [2024-07-16 01:31:57.841780] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:31.891 [2024-07-16 01:31:57.841921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.891 [2024-07-16 01:31:57.841938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:31.891 [2024-07-16 01:31:57.851075] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:31.891 [2024-07-16 01:31:57.851218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.891 [2024-07-16 01:31:57.851235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:31.891 [2024-07-16 01:31:57.860392] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:31.891 [2024-07-16 01:31:57.860534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.891 [2024-07-16 01:31:57.860551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:31.891 [2024-07-16 01:31:57.869697] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:31.891 [2024-07-16 01:31:57.869837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.891 [2024-07-16 01:31:57.869855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.150 [2024-07-16 01:31:57.879200] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.150 [2024-07-16 01:31:57.879347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.150 [2024-07-16 01:31:57.879369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.150 [2024-07-16 01:31:57.888682] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.150 [2024-07-16 01:31:57.888825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.150 [2024-07-16 01:31:57.888848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.150 [2024-07-16 01:31:57.897972] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.150 [2024-07-16 01:31:57.898114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.150 [2024-07-16 01:31:57.898133] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.150 [2024-07-16 01:31:57.907350] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.150 [2024-07-16 01:31:57.907496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.150 [2024-07-16 01:31:57.907515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.150 [2024-07-16 01:31:57.916942] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.150 [2024-07-16 01:31:57.917087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.150 [2024-07-16 01:31:57.917106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.150 [2024-07-16 01:31:57.926352] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.150 [2024-07-16 01:31:57.926497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.150 [2024-07-16 01:31:57.926515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.150 [2024-07-16 01:31:57.935661] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.150 [2024-07-16 01:31:57.935804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.150 [2024-07-16 01:31:57.935822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.150 [2024-07-16 01:31:57.944945] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.150 [2024-07-16 01:31:57.945089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.150 [2024-07-16 01:31:57.945106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.150 [2024-07-16 01:31:57.954246] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.150 [2024-07-16 01:31:57.954395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.150 [2024-07-16 01:31:57.954415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.150 [2024-07-16 01:31:57.963582] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.150 [2024-07-16 01:31:57.963725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.150 [2024-07-16 01:31:57.963743] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.150 [2024-07-16 01:31:57.972934] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.150 [2024-07-16 01:31:57.973076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.150 [2024-07-16 01:31:57.973099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.150 [2024-07-16 01:31:57.982254] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.150 [2024-07-16 01:31:57.982402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.150 [2024-07-16 01:31:57.982421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.150 [2024-07-16 01:31:57.991580] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.150 [2024-07-16 01:31:57.991722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.150 [2024-07-16 01:31:57.991739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.150 [2024-07-16 01:31:58.000871] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.150 [2024-07-16 01:31:58.001012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.150 [2024-07-16 01:31:58.001030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.150 [2024-07-16 01:31:58.010181] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.150 [2024-07-16 01:31:58.010323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.150 [2024-07-16 01:31:58.010347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.150 [2024-07-16 01:31:58.019528] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.150 [2024-07-16 01:31:58.019669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.150 [2024-07-16 01:31:58.019686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.150 [2024-07-16 01:31:58.028966] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.150 [2024-07-16 01:31:58.029108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.150 [2024-07-16 01:31:58.029125] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.150 [2024-07-16 01:31:58.038287] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.150 [2024-07-16 01:31:58.038436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.150 [2024-07-16 01:31:58.038454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.150 [2024-07-16 01:31:58.047755] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.151 [2024-07-16 01:31:58.047901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.151 [2024-07-16 01:31:58.047919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.151 [2024-07-16 01:31:58.057090] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.151 [2024-07-16 01:31:58.057234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.151 [2024-07-16 01:31:58.057251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.151 [2024-07-16 01:31:58.066412] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.151 [2024-07-16 01:31:58.066555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.151 [2024-07-16 01:31:58.066572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.151 [2024-07-16 01:31:58.075717] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.151 [2024-07-16 01:31:58.075856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.151 [2024-07-16 01:31:58.075873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.151 [2024-07-16 01:31:58.085009] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.151 [2024-07-16 01:31:58.085150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.151 [2024-07-16 01:31:58.085167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.151 [2024-07-16 01:31:58.094299] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.151 [2024-07-16 01:31:58.094451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.151 [2024-07-16 
01:31:58.094468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.151 [2024-07-16 01:31:58.103589] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.151 [2024-07-16 01:31:58.103731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.151 [2024-07-16 01:31:58.103748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.151 [2024-07-16 01:31:58.112892] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.151 [2024-07-16 01:31:58.113033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.151 [2024-07-16 01:31:58.113051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.151 [2024-07-16 01:31:58.122210] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.151 [2024-07-16 01:31:58.122353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.151 [2024-07-16 01:31:58.122370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.151 [2024-07-16 01:31:58.131501] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.151 [2024-07-16 01:31:58.131642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.151 [2024-07-16 01:31:58.131660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.409 [2024-07-16 01:31:58.141055] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.410 [2024-07-16 01:31:58.141200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.410 [2024-07-16 01:31:58.141221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.410 [2024-07-16 01:31:58.150474] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.410 [2024-07-16 01:31:58.150615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.410 [2024-07-16 01:31:58.150634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.410 [2024-07-16 01:31:58.159780] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.410 [2024-07-16 01:31:58.159921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:32.410 [2024-07-16 01:31:58.159939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.410 [2024-07-16 01:31:58.169114] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.410 [2024-07-16 01:31:58.169253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.410 [2024-07-16 01:31:58.169271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.410 [2024-07-16 01:31:58.178423] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.410 [2024-07-16 01:31:58.178565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.410 [2024-07-16 01:31:58.178582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.410 [2024-07-16 01:31:58.187869] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.410 [2024-07-16 01:31:58.188012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.410 [2024-07-16 01:31:58.188030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.410 [2024-07-16 01:31:58.197157] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.410 [2024-07-16 01:31:58.197299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.410 [2024-07-16 01:31:58.197317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.410 [2024-07-16 01:31:58.206464] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.410 [2024-07-16 01:31:58.206606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.410 [2024-07-16 01:31:58.206623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.410 [2024-07-16 01:31:58.215783] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.410 [2024-07-16 01:31:58.215926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.410 [2024-07-16 01:31:58.215948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.410 [2024-07-16 01:31:58.225278] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.410 [2024-07-16 01:31:58.225426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22727 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:32.410 [2024-07-16 01:31:58.225444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.410 [2024-07-16 01:31:58.234580] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.410 [2024-07-16 01:31:58.234726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.410 [2024-07-16 01:31:58.234744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.410 [2024-07-16 01:31:58.243894] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.410 [2024-07-16 01:31:58.244034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.410 [2024-07-16 01:31:58.244051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.410 [2024-07-16 01:31:58.253195] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.410 [2024-07-16 01:31:58.253341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.410 [2024-07-16 01:31:58.253358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.410 [2024-07-16 01:31:58.262522] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.410 [2024-07-16 01:31:58.262662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.410 [2024-07-16 01:31:58.262679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.410 [2024-07-16 01:31:58.271830] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.410 [2024-07-16 01:31:58.271973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.410 [2024-07-16 01:31:58.271990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.410 [2024-07-16 01:31:58.281255] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.410 [2024-07-16 01:31:58.281404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.410 [2024-07-16 01:31:58.281422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.410 [2024-07-16 01:31:58.290572] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.410 [2024-07-16 01:31:58.290714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25131 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:32.410 [2024-07-16 01:31:58.290731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.410 [2024-07-16 01:31:58.299864] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.410 [2024-07-16 01:31:58.300010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.410 [2024-07-16 01:31:58.300029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.410 [2024-07-16 01:31:58.309174] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.410 [2024-07-16 01:31:58.309319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.410 [2024-07-16 01:31:58.309340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.410 [2024-07-16 01:31:58.318467] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.410 [2024-07-16 01:31:58.318611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.410 [2024-07-16 01:31:58.318629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.410 [2024-07-16 01:31:58.327750] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.410 [2024-07-16 01:31:58.327892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.410 [2024-07-16 01:31:58.327909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.410 [2024-07-16 01:31:58.337057] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.410 [2024-07-16 01:31:58.337199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.410 [2024-07-16 01:31:58.337216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.410 [2024-07-16 01:31:58.346360] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.410 [2024-07-16 01:31:58.346501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.410 [2024-07-16 01:31:58.346519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.410 [2024-07-16 01:31:58.355646] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.410 [2024-07-16 01:31:58.355786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7526 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.410 [2024-07-16 01:31:58.355804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.410 [2024-07-16 01:31:58.364948] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.410 [2024-07-16 01:31:58.365089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.410 [2024-07-16 01:31:58.365106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.410 [2024-07-16 01:31:58.374293] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.410 [2024-07-16 01:31:58.374441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.410 [2024-07-16 01:31:58.374458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.410 [2024-07-16 01:31:58.383588] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.410 [2024-07-16 01:31:58.383731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.410 [2024-07-16 01:31:58.383749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.410 [2024-07-16 01:31:58.392914] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.410 [2024-07-16 01:31:58.393060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.410 [2024-07-16 01:31:58.393079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.669 [2024-07-16 01:31:58.402487] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.669 [2024-07-16 01:31:58.402631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.669 [2024-07-16 01:31:58.402652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.669 [2024-07-16 01:31:58.411861] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.669 [2024-07-16 01:31:58.412002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.669 [2024-07-16 01:31:58.412024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.669 [2024-07-16 01:31:58.421143] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.669 [2024-07-16 01:31:58.421287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 
nsid:1 lba:6648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.669 [2024-07-16 01:31:58.421305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.669 [2024-07-16 01:31:58.430441] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.669 [2024-07-16 01:31:58.430584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.669 [2024-07-16 01:31:58.430602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.669 [2024-07-16 01:31:58.439758] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.669 [2024-07-16 01:31:58.439899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.669 [2024-07-16 01:31:58.439917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.669 [2024-07-16 01:31:58.449026] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.669 [2024-07-16 01:31:58.449167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.669 [2024-07-16 01:31:58.449184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.669 [2024-07-16 01:31:58.458362] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.669 [2024-07-16 01:31:58.458503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.669 [2024-07-16 01:31:58.458523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.669 [2024-07-16 01:31:58.467662] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.669 [2024-07-16 01:31:58.467801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.669 [2024-07-16 01:31:58.467818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.669 [2024-07-16 01:31:58.476957] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.669 [2024-07-16 01:31:58.477097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.669 [2024-07-16 01:31:58.477114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.669 [2024-07-16 01:31:58.486251] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.669 [2024-07-16 01:31:58.486399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:11 nsid:1 lba:15742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.669 [2024-07-16 01:31:58.486418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.669 [2024-07-16 01:31:58.495540] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.669 [2024-07-16 01:31:58.495681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.669 [2024-07-16 01:31:58.495698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.669 [2024-07-16 01:31:58.504815] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.669 [2024-07-16 01:31:58.504957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.669 [2024-07-16 01:31:58.504975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.669 [2024-07-16 01:31:58.514094] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.670 [2024-07-16 01:31:58.514236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.670 [2024-07-16 01:31:58.514254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.670 [2024-07-16 01:31:58.523397] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.670 [2024-07-16 01:31:58.523538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.670 [2024-07-16 01:31:58.523555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.670 [2024-07-16 01:31:58.532818] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.670 [2024-07-16 01:31:58.532961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.670 [2024-07-16 01:31:58.532979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.670 [2024-07-16 01:31:58.542143] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.670 [2024-07-16 01:31:58.542287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.670 [2024-07-16 01:31:58.542304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.670 [2024-07-16 01:31:58.551437] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.670 [2024-07-16 01:31:58.551580] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.670 [2024-07-16 01:31:58.551597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.670 [2024-07-16 01:31:58.560767] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.670 [2024-07-16 01:31:58.560909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.670 [2024-07-16 01:31:58.560926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.670 [2024-07-16 01:31:58.570100] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.670 [2024-07-16 01:31:58.570243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.670 [2024-07-16 01:31:58.570260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.670 [2024-07-16 01:31:58.579464] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.670 [2024-07-16 01:31:58.579606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.670 [2024-07-16 01:31:58.579622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.670 [2024-07-16 01:31:58.588773] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.670 [2024-07-16 01:31:58.588912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.670 [2024-07-16 01:31:58.588929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.670 [2024-07-16 01:31:58.598169] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.670 [2024-07-16 01:31:58.598314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.670 [2024-07-16 01:31:58.598333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.670 [2024-07-16 01:31:58.607517] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.670 [2024-07-16 01:31:58.607658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.670 [2024-07-16 01:31:58.607675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.670 [2024-07-16 01:31:58.616808] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.670 [2024-07-16 01:31:58.616950] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.670 [2024-07-16 01:31:58.616967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.670 [2024-07-16 01:31:58.626080] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.670 [2024-07-16 01:31:58.626221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.670 [2024-07-16 01:31:58.626238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.670 [2024-07-16 01:31:58.635397] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.670 [2024-07-16 01:31:58.635541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.670 [2024-07-16 01:31:58.635559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.670 [2024-07-16 01:31:58.644673] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.670 [2024-07-16 01:31:58.644814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.670 [2024-07-16 01:31:58.644830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.670 [2024-07-16 01:31:58.654024] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.670 [2024-07-16 01:31:58.654171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.670 [2024-07-16 01:31:58.654191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.929 [2024-07-16 01:31:58.663615] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.929 [2024-07-16 01:31:58.663754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.929 [2024-07-16 01:31:58.663775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.929 [2024-07-16 01:31:58.672984] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.929 [2024-07-16 01:31:58.673127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.929 [2024-07-16 01:31:58.673146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.929 [2024-07-16 01:31:58.682302] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.929 [2024-07-16 01:31:58.682452] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.929 [2024-07-16 01:31:58.682470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.929 [2024-07-16 01:31:58.691608] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.929 [2024-07-16 01:31:58.691752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.929 [2024-07-16 01:31:58.691770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.929 [2024-07-16 01:31:58.701043] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.929 [2024-07-16 01:31:58.701189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.929 [2024-07-16 01:31:58.701207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.929 [2024-07-16 01:31:58.710608] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.929 [2024-07-16 01:31:58.710754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.929 [2024-07-16 01:31:58.710773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.929 [2024-07-16 01:31:58.720168] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.929 [2024-07-16 01:31:58.720313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.929 [2024-07-16 01:31:58.720331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.929 [2024-07-16 01:31:58.729705] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.929 [2024-07-16 01:31:58.729852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.929 [2024-07-16 01:31:58.729869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.929 [2024-07-16 01:31:58.739472] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.929 [2024-07-16 01:31:58.739617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.929 [2024-07-16 01:31:58.739634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.929 [2024-07-16 01:31:58.749000] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.929 [2024-07-16 
01:31:58.749145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.929 [2024-07-16 01:31:58.749163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.929 [2024-07-16 01:31:58.758592] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.929 [2024-07-16 01:31:58.758738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.929 [2024-07-16 01:31:58.758756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.929 [2024-07-16 01:31:58.768163] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.929 [2024-07-16 01:31:58.768307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.929 [2024-07-16 01:31:58.768324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.929 [2024-07-16 01:31:58.777693] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.929 [2024-07-16 01:31:58.777839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.929 [2024-07-16 01:31:58.777858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.929 [2024-07-16 01:31:58.787242] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.929 [2024-07-16 01:31:58.787393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.929 [2024-07-16 01:31:58.787414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.929 [2024-07-16 01:31:58.796760] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.929 [2024-07-16 01:31:58.796900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.929 [2024-07-16 01:31:58.796917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.929 [2024-07-16 01:31:58.806045] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.929 [2024-07-16 01:31:58.806186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.929 [2024-07-16 01:31:58.806203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.929 [2024-07-16 01:31:58.815350] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 
00:26:32.929 [2024-07-16 01:31:58.815493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.929 [2024-07-16 01:31:58.815510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.929 [2024-07-16 01:31:58.824627] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.929 [2024-07-16 01:31:58.824771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.930 [2024-07-16 01:31:58.824788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.930 [2024-07-16 01:31:58.833890] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.930 [2024-07-16 01:31:58.834033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.930 [2024-07-16 01:31:58.834050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.930 [2024-07-16 01:31:58.843211] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.930 [2024-07-16 01:31:58.843354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.930 [2024-07-16 01:31:58.843371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.930 [2024-07-16 01:31:58.852482] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.930 [2024-07-16 01:31:58.852623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.930 [2024-07-16 01:31:58.852640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.930 [2024-07-16 01:31:58.861772] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.930 [2024-07-16 01:31:58.861911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.930 [2024-07-16 01:31:58.861929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.930 [2024-07-16 01:31:58.871086] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.930 [2024-07-16 01:31:58.871232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.930 [2024-07-16 01:31:58.871249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.930 [2024-07-16 01:31:58.880372] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with 
pdu=0x2000190fdeb0 00:26:32.930 [2024-07-16 01:31:58.880513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.930 [2024-07-16 01:31:58.880530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.930 [2024-07-16 01:31:58.889661] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.930 [2024-07-16 01:31:58.889803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.930 [2024-07-16 01:31:58.889821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.930 [2024-07-16 01:31:58.898949] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.930 [2024-07-16 01:31:58.899089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.930 [2024-07-16 01:31:58.899106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:32.930 [2024-07-16 01:31:58.908228] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:32.930 [2024-07-16 01:31:58.908374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.930 [2024-07-16 01:31:58.908392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.189 [2024-07-16 01:31:58.917770] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.189 [2024-07-16 01:31:58.917911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.189 [2024-07-16 01:31:58.917932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.189 [2024-07-16 01:31:58.927209] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.189 [2024-07-16 01:31:58.927351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.189 [2024-07-16 01:31:58.927373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.189 [2024-07-16 01:31:58.936539] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.189 [2024-07-16 01:31:58.936681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.189 [2024-07-16 01:31:58.936699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.189 [2024-07-16 01:31:58.945835] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.189 [2024-07-16 01:31:58.945977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.189 [2024-07-16 01:31:58.945995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.189 [2024-07-16 01:31:58.955074] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.189 [2024-07-16 01:31:58.955217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.189 [2024-07-16 01:31:58.955234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.189 [2024-07-16 01:31:58.964398] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.189 [2024-07-16 01:31:58.964539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.189 [2024-07-16 01:31:58.964557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.189 [2024-07-16 01:31:58.973721] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.189 [2024-07-16 01:31:58.973865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.189 [2024-07-16 01:31:58.973882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.189 [2024-07-16 01:31:58.983018] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.189 [2024-07-16 01:31:58.983158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.189 [2024-07-16 01:31:58.983175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.190 [2024-07-16 01:31:58.992300] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.190 [2024-07-16 01:31:58.992448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.190 [2024-07-16 01:31:58.992466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.190 [2024-07-16 01:31:59.001586] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.190 [2024-07-16 01:31:59.001729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.190 [2024-07-16 01:31:59.001747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.190 [2024-07-16 01:31:59.010869] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.190 [2024-07-16 01:31:59.011010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.190 [2024-07-16 01:31:59.011027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.190 [2024-07-16 01:31:59.020154] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.190 [2024-07-16 01:31:59.020295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.190 [2024-07-16 01:31:59.020313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.190 [2024-07-16 01:31:59.029451] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.190 [2024-07-16 01:31:59.029593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.190 [2024-07-16 01:31:59.029613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.190 [2024-07-16 01:31:59.038889] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.190 [2024-07-16 01:31:59.039032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.190 [2024-07-16 01:31:59.039049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.190 [2024-07-16 01:31:59.048315] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.190 [2024-07-16 01:31:59.048466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.190 [2024-07-16 01:31:59.048483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.190 [2024-07-16 01:31:59.057624] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.190 [2024-07-16 01:31:59.057766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.190 [2024-07-16 01:31:59.057783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.190 [2024-07-16 01:31:59.066897] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.190 [2024-07-16 01:31:59.067039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.190 [2024-07-16 01:31:59.067056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.190 [2024-07-16 01:31:59.076171] tcp.c:2081:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.190 [2024-07-16 01:31:59.076312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.190 [2024-07-16 01:31:59.076330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.190 [2024-07-16 01:31:59.085457] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.190 [2024-07-16 01:31:59.085597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.190 [2024-07-16 01:31:59.085615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.190 [2024-07-16 01:31:59.094748] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.190 [2024-07-16 01:31:59.094890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.190 [2024-07-16 01:31:59.094907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.190 [2024-07-16 01:31:59.104024] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.190 [2024-07-16 01:31:59.104166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.190 [2024-07-16 01:31:59.104183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.190 [2024-07-16 01:31:59.113324] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.190 [2024-07-16 01:31:59.113476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.190 [2024-07-16 01:31:59.113493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.190 [2024-07-16 01:31:59.122630] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.190 [2024-07-16 01:31:59.122772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.190 [2024-07-16 01:31:59.122789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.190 [2024-07-16 01:31:59.131898] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.190 [2024-07-16 01:31:59.132040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.190 [2024-07-16 01:31:59.132058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.190 [2024-07-16 01:31:59.141204] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.190 [2024-07-16 01:31:59.141351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.190 [2024-07-16 01:31:59.141369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.190 [2024-07-16 01:31:59.150483] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.190 [2024-07-16 01:31:59.150624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.190 [2024-07-16 01:31:59.150641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.190 [2024-07-16 01:31:59.159779] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.190 [2024-07-16 01:31:59.159922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.190 [2024-07-16 01:31:59.159939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.190 [2024-07-16 01:31:59.169126] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.190 [2024-07-16 01:31:59.169266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.190 [2024-07-16 01:31:59.169283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.449 [2024-07-16 01:31:59.178640] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.449 [2024-07-16 01:31:59.178783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.449 [2024-07-16 01:31:59.178804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.449 [2024-07-16 01:31:59.188119] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.449 [2024-07-16 01:31:59.188262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.449 [2024-07-16 01:31:59.188282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.449 [2024-07-16 01:31:59.197520] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.449 [2024-07-16 01:31:59.197664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.449 [2024-07-16 01:31:59.197683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.449 [2024-07-16 
01:31:59.206980] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.449 [2024-07-16 01:31:59.207127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.449 [2024-07-16 01:31:59.207146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.449 [2024-07-16 01:31:59.216346] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.449 [2024-07-16 01:31:59.216488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.449 [2024-07-16 01:31:59.216506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.449 [2024-07-16 01:31:59.225826] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.449 [2024-07-16 01:31:59.225969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.449 [2024-07-16 01:31:59.225986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.449 [2024-07-16 01:31:59.235093] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.449 [2024-07-16 01:31:59.235235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.449 [2024-07-16 01:31:59.235254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.449 [2024-07-16 01:31:59.244408] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.449 [2024-07-16 01:31:59.244551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.449 [2024-07-16 01:31:59.244568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.449 [2024-07-16 01:31:59.253891] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.450 [2024-07-16 01:31:59.254034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.450 [2024-07-16 01:31:59.254051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.450 [2024-07-16 01:31:59.263235] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.450 [2024-07-16 01:31:59.263384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.450 [2024-07-16 01:31:59.263402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.450 
[2024-07-16 01:31:59.272556] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.450 [2024-07-16 01:31:59.272710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.450 [2024-07-16 01:31:59.272730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.450 [2024-07-16 01:31:59.281846] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.450 [2024-07-16 01:31:59.281990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.450 [2024-07-16 01:31:59.282008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.450 [2024-07-16 01:31:59.291314] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.450 [2024-07-16 01:31:59.291465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.450 [2024-07-16 01:31:59.291483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.450 [2024-07-16 01:31:59.300644] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.450 [2024-07-16 01:31:59.300784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.450 [2024-07-16 01:31:59.300802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.450 [2024-07-16 01:31:59.309986] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.450 [2024-07-16 01:31:59.310130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.450 [2024-07-16 01:31:59.310148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.450 [2024-07-16 01:31:59.319295] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.450 [2024-07-16 01:31:59.319458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.450 [2024-07-16 01:31:59.319475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.450 [2024-07-16 01:31:59.328743] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.450 [2024-07-16 01:31:59.328890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.450 [2024-07-16 01:31:59.328908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 
00:26:33.450 [2024-07-16 01:31:59.338199] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.450 [2024-07-16 01:31:59.338345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.450 [2024-07-16 01:31:59.338363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.450 [2024-07-16 01:31:59.347503] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.450 [2024-07-16 01:31:59.347646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.450 [2024-07-16 01:31:59.347663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.450 [2024-07-16 01:31:59.356783] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.450 [2024-07-16 01:31:59.356925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.450 [2024-07-16 01:31:59.356945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.450 [2024-07-16 01:31:59.366113] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.450 [2024-07-16 01:31:59.366259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.450 [2024-07-16 01:31:59.366276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.450 [2024-07-16 01:31:59.375547] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.450 [2024-07-16 01:31:59.375691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.450 [2024-07-16 01:31:59.375709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.450 [2024-07-16 01:31:59.385030] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.450 [2024-07-16 01:31:59.385174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.450 [2024-07-16 01:31:59.385192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.450 [2024-07-16 01:31:59.394367] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.450 [2024-07-16 01:31:59.394509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.450 [2024-07-16 01:31:59.394528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e 
p:0 m:0 dnr:0 00:26:33.450 [2024-07-16 01:31:59.403658] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.450 [2024-07-16 01:31:59.403799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.450 [2024-07-16 01:31:59.403817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.450 [2024-07-16 01:31:59.412937] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.450 [2024-07-16 01:31:59.413080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.450 [2024-07-16 01:31:59.413097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.450 [2024-07-16 01:31:59.422266] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.450 [2024-07-16 01:31:59.422416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.450 [2024-07-16 01:31:59.422434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.450 [2024-07-16 01:31:59.431547] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.450 [2024-07-16 01:31:59.431689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.450 [2024-07-16 01:31:59.431708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.710 [2024-07-16 01:31:59.441157] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.710 [2024-07-16 01:31:59.441307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.710 [2024-07-16 01:31:59.441328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.710 [2024-07-16 01:31:59.450530] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.710 [2024-07-16 01:31:59.450672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.710 [2024-07-16 01:31:59.450691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.710 [2024-07-16 01:31:59.459808] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0 00:26:33.710 [2024-07-16 01:31:59.459951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.710 [2024-07-16 01:31:59.459969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 
cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:26:33.710 [2024-07-16 01:31:59.469096] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0
00:26:33.710 [2024-07-16 01:31:59.469236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:33.710 [2024-07-16 01:31:59.469254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:26:33.710 [2024-07-16 01:31:59.478374] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0
00:26:33.710 [2024-07-16 01:31:59.478517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:33.710 [2024-07-16 01:31:59.478534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:26:33.710 [2024-07-16 01:31:59.487697] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0
00:26:33.710 [2024-07-16 01:31:59.487839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:33.710 [2024-07-16 01:31:59.487858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:26:33.710 [2024-07-16 01:31:59.496985] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0
00:26:33.710 [2024-07-16 01:31:59.497128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:33.710 [2024-07-16 01:31:59.497146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:26:33.710 [2024-07-16 01:31:59.506268] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a44c0) with pdu=0x2000190fdeb0
00:26:33.710 [2024-07-16 01:31:59.506417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:33.710 [2024-07-16 01:31:59.506435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:26:33.710
00:26:33.710                                                                 Latency(us)
00:26:33.710 Device Information            : runtime(s)       IOPS      MiB/s     Fail/s      TO/s    Average        min        max
00:26:33.710 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:26:33.710      nvme0n1                  :       2.00   27280.20     106.56       0.00      0.00    4684.00    2855.50    9799.19
00:26:33.710 ===================================================================================================================
00:26:33.710 Total                         :            27280.20     106.56       0.00      0.00    4684.00    2855.50    9799.19
00:26:33.710 0
00:26:33.710 01:31:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:33.710 01:31:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:33.710 01:31:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:33.710 | .driver_specific
00:26:33.710 | .nvme_error
00:26:33.710 | .status_code
00:26:33.710 | .command_transient_transport_error'
00:26:33.710 01:31:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:33.969 01:31:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 214 > 0 ))
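The passing check just traced is worth unpacking: get_transient_errcount is an iostat query piped through jq, and the 214 it returns is the number of WRITE completions that came back as TRANSIENT TRANSPORT ERROR after the injected digest failures (the table's 27280.20 IOPS of 4096-byte writes is also consistent with its 106.56 MiB/s, since 27280.20 * 4096 / 2^20 = 106.56). A minimal standalone sketch of the same check, with $SPDK_DIR standing in for the harness checkout path and the jq filter collapsed onto one line:

    # Query per-bdev I/O statistics over the bperf RPC socket; the nvme_error
    # counters appear under driver_specific because bdev_nvme_set_options was
    # run with --nvme-error-stat when this bdevperf instance was configured.
    count=$("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # Each injected CRC32C failure should surface as a TRANSIENT TRANSPORT ERROR
    # completion, so the run passes only when at least one was counted.
    (( count > 0 ))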
00:26:33.969 01:31:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3532640
00:26:33.969 01:31:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3532640 ']'
00:26:33.969 01:31:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3532640
00:26:33.969 01:31:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:26:33.969 01:31:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:33.969 01:31:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3532640
00:26:33.969 01:31:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:26:33.969 01:31:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:26:33.969 01:31:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3532640'
00:26:33.969 killing process with pid 3532640
00:26:33.969 01:31:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3532640
00:26:33.969 Received shutdown signal, test time was about 2.000000 seconds
00:26:33.969
00:26:33.969                                                                 Latency(us)
00:26:33.969 Device Information            : runtime(s)       IOPS      MiB/s     Fail/s      TO/s    Average        min        max
00:26:33.969 ===================================================================================================================
00:26:33.969 Total                         :                0.00       0.00       0.00      0.00       0.00       0.00       0.00
00:26:33.969 01:31:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3532640
00:26:33.969 01:31:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:26:33.969 01:31:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:33.969 01:31:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:26:33.969 01:31:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:26:33.969 01:31:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:26:33.969 01:31:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3533339
00:26:33.969 01:31:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3533339 /var/tmp/bperf.sock
00:26:33.969 01:31:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:26:33.970 01:31:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3533339 ']'
00:26:33.970 01:31:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:33.970 01:31:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:26:33.970 01:31:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:33.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
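This is the harness's standard bperf launch pattern: start bdevperf suspended, remember its pid, and poll the RPC socket until the process answers. A rough equivalent under the same $SPDK_DIR assumption, with the polling loop as a stand-in for the waitforlisten helper traced above:

    # Launch bdevperf idle; -z makes it wait for a perform_tests RPC instead of
    # starting I/O immediately, and -r names the UNIX-domain RPC socket.
    "$SPDK_DIR/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # Poll until the socket exists and answers a trivial RPC.
    until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done

Note the workload change from the previous run: 131072-byte writes at queue depth 16 instead of 4096-byte writes at depth 128, which is why bdevperf warns below that the I/O size exceeds its 65536-byte zero-copy threshold.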
00:26:33.970 01:31:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:26:33.970 01:31:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:34.227 [2024-07-16 01:31:59.978971] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization...
00:26:34.227 [2024-07-16 01:31:59.979022] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3533339 ]
00:26:34.227 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:34.227 Zero copy mechanism will not be used.
00:26:34.227 EAL: No free 2048 kB hugepages reported on node 1
00:26:34.227 [2024-07-16 01:32:00.036154] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:34.227 [2024-07-16 01:32:00.107542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:35.160 01:32:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:35.160 01:32:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:26:35.160 01:32:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:35.160 01:32:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:35.160 01:32:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:35.160 01:32:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:35.160 01:32:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:35.160 01:32:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:35.160 01:32:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:35.160 01:32:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:35.418 nvme0n1
00:26:35.418 01:32:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:26:35.418 01:32:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:35.418 01:32:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:35.418 01:32:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:35.418 01:32:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:35.418 01:32:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
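Pulled out of the trace, the whole setup for this error run is four RPCs plus the trigger. A condensed sketch under the same path and socket assumptions as above (the comments are interpretation, not harness output):

    rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock"
    # Keep per-bdev NVMe error counters and retry failed I/O indefinitely.
    $rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Make sure no corruption is active while the controller attaches.
    $rpc accel_error_inject_error -o crc32c -t disable
    # Attach over TCP with data digest (--ddgst) enabled, so every data PDU
    # carries a CRC32C that gets verified.
    $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Re-enable injection: corrupt crc32c results (-i 32, as in the trace above).
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32
    # Kick off the queued randwrite workload.
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

Every digest mismatch that follows is then expected: the corrupted CRC32C makes the data-digest check fail in tcp.c, and each affected command completes with the TRANSIENT TRANSPORT ERROR status that the nvme-error-stat counters pick up.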
00:26:35.418 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:35.418 Zero copy mechanism will not be used.
00:26:35.418 Running I/O for 2 seconds...
00:26:35.418 [2024-07-16 01:32:01.317694] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:35.418 [2024-07-16 01:32:01.318071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:35.418 [2024-07-16 01:32:01.318099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:35.418 [2024-07-16 01:32:01.322458] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:35.418 [2024-07-16 01:32:01.322838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:35.418 [2024-07-16 01:32:01.322866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:35.418 [2024-07-16 01:32:01.327571] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:35.418 [2024-07-16 01:32:01.327927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:35.418 [2024-07-16 01:32:01.327948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:35.418 [2024-07-16 01:32:01.332915] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:35.418 [2024-07-16 01:32:01.333308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:35.418 [2024-07-16 01:32:01.333328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:35.418 [2024-07-16 01:32:01.338874] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:35.418 [2024-07-16 01:32:01.339247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:35.418 [2024-07-16 01:32:01.339268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:35.418 [2024-07-16 01:32:01.344923] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:35.418 [2024-07-16 01:32:01.345289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:35.418 [2024-07-16 01:32:01.345310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:35.418 [2024-07-16 01:32:01.350274] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:35.418 [2024-07-16
01:32:01.350659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.418 [2024-07-16 01:32:01.350679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.418 [2024-07-16 01:32:01.355586] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.418 [2024-07-16 01:32:01.355959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.418 [2024-07-16 01:32:01.355979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.418 [2024-07-16 01:32:01.360714] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.418 [2024-07-16 01:32:01.361086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.418 [2024-07-16 01:32:01.361105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.418 [2024-07-16 01:32:01.365542] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.418 [2024-07-16 01:32:01.365905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.418 [2024-07-16 01:32:01.365925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.418 [2024-07-16 01:32:01.370588] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.418 [2024-07-16 01:32:01.370957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.418 [2024-07-16 01:32:01.370976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.418 [2024-07-16 01:32:01.376046] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.418 [2024-07-16 01:32:01.376446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.418 [2024-07-16 01:32:01.376465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.418 [2024-07-16 01:32:01.381617] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.418 [2024-07-16 01:32:01.381986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.418 [2024-07-16 01:32:01.382005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.418 [2024-07-16 01:32:01.387590] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with 
pdu=0x2000190fef90 00:26:35.418 [2024-07-16 01:32:01.387954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.418 [2024-07-16 01:32:01.387974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.418 [2024-07-16 01:32:01.393200] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.418 [2024-07-16 01:32:01.393259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.418 [2024-07-16 01:32:01.393276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.418 [2024-07-16 01:32:01.399126] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.418 [2024-07-16 01:32:01.399485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.418 [2024-07-16 01:32:01.399504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.418 [2024-07-16 01:32:01.405317] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.419 [2024-07-16 01:32:01.405430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.419 [2024-07-16 01:32:01.405456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.677 [2024-07-16 01:32:01.413091] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.677 [2024-07-16 01:32:01.413479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.677 [2024-07-16 01:32:01.413502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.677 [2024-07-16 01:32:01.419784] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.677 [2024-07-16 01:32:01.420161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.677 [2024-07-16 01:32:01.420180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.677 [2024-07-16 01:32:01.426619] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.677 [2024-07-16 01:32:01.427010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.677 [2024-07-16 01:32:01.427030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.677 [2024-07-16 01:32:01.433676] tcp.c:2081:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.677 [2024-07-16 01:32:01.434050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.677 [2024-07-16 01:32:01.434069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.677 [2024-07-16 01:32:01.440914] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.677 [2024-07-16 01:32:01.441288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.677 [2024-07-16 01:32:01.441308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.677 [2024-07-16 01:32:01.447694] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.677 [2024-07-16 01:32:01.448084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.677 [2024-07-16 01:32:01.448103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.677 [2024-07-16 01:32:01.454801] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.677 [2024-07-16 01:32:01.455200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.677 [2024-07-16 01:32:01.455219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.677 [2024-07-16 01:32:01.462000] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.677 [2024-07-16 01:32:01.462385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.677 [2024-07-16 01:32:01.462405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.677 [2024-07-16 01:32:01.469161] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.677 [2024-07-16 01:32:01.469522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.677 [2024-07-16 01:32:01.469541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.677 [2024-07-16 01:32:01.476382] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.677 [2024-07-16 01:32:01.476764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.677 [2024-07-16 01:32:01.476783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.677 [2024-07-16 01:32:01.483555] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.677 [2024-07-16 01:32:01.483929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.677 [2024-07-16 01:32:01.483951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.677 [2024-07-16 01:32:01.490268] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.677 [2024-07-16 01:32:01.490640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.678 [2024-07-16 01:32:01.490660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.678 [2024-07-16 01:32:01.497218] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.678 [2024-07-16 01:32:01.497582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.678 [2024-07-16 01:32:01.497601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.678 [2024-07-16 01:32:01.504199] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.678 [2024-07-16 01:32:01.504587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.678 [2024-07-16 01:32:01.504606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.678 [2024-07-16 01:32:01.511691] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.678 [2024-07-16 01:32:01.512085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.678 [2024-07-16 01:32:01.512104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.678 [2024-07-16 01:32:01.519049] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.678 [2024-07-16 01:32:01.519120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.678 [2024-07-16 01:32:01.519137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.678 [2024-07-16 01:32:01.526966] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.678 [2024-07-16 01:32:01.527353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.678 [2024-07-16 01:32:01.527373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:26:35.678 [2024-07-16 01:32:01.534690] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.678 [2024-07-16 01:32:01.535062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.678 [2024-07-16 01:32:01.535081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.678 [2024-07-16 01:32:01.540955] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.678 [2024-07-16 01:32:01.541311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.678 [2024-07-16 01:32:01.541330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.678 [2024-07-16 01:32:01.546820] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.678 [2024-07-16 01:32:01.547200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.678 [2024-07-16 01:32:01.547219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.678 [2024-07-16 01:32:01.552333] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.678 [2024-07-16 01:32:01.552397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.678 [2024-07-16 01:32:01.552415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.678 [2024-07-16 01:32:01.558480] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.678 [2024-07-16 01:32:01.558856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.678 [2024-07-16 01:32:01.558875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.678 [2024-07-16 01:32:01.564600] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.678 [2024-07-16 01:32:01.564971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.678 [2024-07-16 01:32:01.565007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.678 [2024-07-16 01:32:01.570659] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.678 [2024-07-16 01:32:01.571046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.678 [2024-07-16 01:32:01.571064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.678 [2024-07-16 01:32:01.576576] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.678 [2024-07-16 01:32:01.576939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.678 [2024-07-16 01:32:01.576959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.678 [2024-07-16 01:32:01.582639] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.678 [2024-07-16 01:32:01.583018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.678 [2024-07-16 01:32:01.583037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.678 [2024-07-16 01:32:01.588421] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.678 [2024-07-16 01:32:01.588779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.678 [2024-07-16 01:32:01.588798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.678 [2024-07-16 01:32:01.594626] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.678 [2024-07-16 01:32:01.595017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.678 [2024-07-16 01:32:01.595036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.678 [2024-07-16 01:32:01.600419] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.678 [2024-07-16 01:32:01.600784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.678 [2024-07-16 01:32:01.600803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.678 [2024-07-16 01:32:01.606475] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.678 [2024-07-16 01:32:01.606844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.678 [2024-07-16 01:32:01.606863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.678 [2024-07-16 01:32:01.612421] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.678 [2024-07-16 01:32:01.612786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.678 [2024-07-16 01:32:01.612805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.678 [2024-07-16 01:32:01.618580] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.678 [2024-07-16 01:32:01.618939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.678 [2024-07-16 01:32:01.618958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.678 [2024-07-16 01:32:01.624440] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.678 [2024-07-16 01:32:01.624827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.678 [2024-07-16 01:32:01.624846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.678 [2024-07-16 01:32:01.630512] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.678 [2024-07-16 01:32:01.630880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.678 [2024-07-16 01:32:01.630898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.678 [2024-07-16 01:32:01.636179] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.678 [2024-07-16 01:32:01.636535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.678 [2024-07-16 01:32:01.636554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.678 [2024-07-16 01:32:01.641324] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.678 [2024-07-16 01:32:01.641694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.678 [2024-07-16 01:32:01.641713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.678 [2024-07-16 01:32:01.647266] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.678 [2024-07-16 01:32:01.647657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.678 [2024-07-16 01:32:01.647683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.678 [2024-07-16 01:32:01.653297] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.678 [2024-07-16 01:32:01.653662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.678 [2024-07-16 01:32:01.653681] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.678 [2024-07-16 01:32:01.658902] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.678 [2024-07-16 01:32:01.659278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.679 [2024-07-16 01:32:01.659297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.937 [2024-07-16 01:32:01.664619] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.937 [2024-07-16 01:32:01.665015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.937 [2024-07-16 01:32:01.665038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.937 [2024-07-16 01:32:01.670000] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.937 [2024-07-16 01:32:01.670367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.937 [2024-07-16 01:32:01.670389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.937 [2024-07-16 01:32:01.674886] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.937 [2024-07-16 01:32:01.675263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.937 [2024-07-16 01:32:01.675283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.937 [2024-07-16 01:32:01.679832] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.937 [2024-07-16 01:32:01.680202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.937 [2024-07-16 01:32:01.680221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.937 [2024-07-16 01:32:01.684940] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.937 [2024-07-16 01:32:01.685298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.937 [2024-07-16 01:32:01.685317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.937 [2024-07-16 01:32:01.690026] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.937 [2024-07-16 01:32:01.690400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.937 
[2024-07-16 01:32:01.690419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.937 [2024-07-16 01:32:01.695260] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.937 [2024-07-16 01:32:01.695638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.937 [2024-07-16 01:32:01.695656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.937 [2024-07-16 01:32:01.700438] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.937 [2024-07-16 01:32:01.700818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.937 [2024-07-16 01:32:01.700837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.937 [2024-07-16 01:32:01.705577] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.938 [2024-07-16 01:32:01.705962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.938 [2024-07-16 01:32:01.705981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.938 [2024-07-16 01:32:01.710269] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.938 [2024-07-16 01:32:01.710642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.938 [2024-07-16 01:32:01.710661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.938 [2024-07-16 01:32:01.714918] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.938 [2024-07-16 01:32:01.715286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.938 [2024-07-16 01:32:01.715304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.938 [2024-07-16 01:32:01.719553] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.938 [2024-07-16 01:32:01.719924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.938 [2024-07-16 01:32:01.719943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.938 [2024-07-16 01:32:01.724189] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.938 [2024-07-16 01:32:01.724558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.938 [2024-07-16 01:32:01.724576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.938 [2024-07-16 01:32:01.728806] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.938 [2024-07-16 01:32:01.729155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.938 [2024-07-16 01:32:01.729174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.938 [2024-07-16 01:32:01.733455] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.938 [2024-07-16 01:32:01.733828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.938 [2024-07-16 01:32:01.733850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.938 [2024-07-16 01:32:01.738054] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.938 [2024-07-16 01:32:01.738415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.938 [2024-07-16 01:32:01.738435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.938 [2024-07-16 01:32:01.742623] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.938 [2024-07-16 01:32:01.742994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.938 [2024-07-16 01:32:01.743013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.938 [2024-07-16 01:32:01.747334] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.938 [2024-07-16 01:32:01.747715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.938 [2024-07-16 01:32:01.747734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.938 [2024-07-16 01:32:01.751897] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.938 [2024-07-16 01:32:01.752266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.938 [2024-07-16 01:32:01.752285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.938 [2024-07-16 01:32:01.756423] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.938 [2024-07-16 01:32:01.756775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.938 [2024-07-16 01:32:01.756794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.938 [2024-07-16 01:32:01.760909] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.938 [2024-07-16 01:32:01.761277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.938 [2024-07-16 01:32:01.761296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.938 [2024-07-16 01:32:01.765471] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.938 [2024-07-16 01:32:01.765827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.938 [2024-07-16 01:32:01.765846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.938 [2024-07-16 01:32:01.770053] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.938 [2024-07-16 01:32:01.770428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.938 [2024-07-16 01:32:01.770447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.938 [2024-07-16 01:32:01.774577] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.938 [2024-07-16 01:32:01.774947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.938 [2024-07-16 01:32:01.774966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.938 [2024-07-16 01:32:01.779265] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.938 [2024-07-16 01:32:01.779649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.938 [2024-07-16 01:32:01.779668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.938 [2024-07-16 01:32:01.784736] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.938 [2024-07-16 01:32:01.785126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.938 [2024-07-16 01:32:01.785147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.938 [2024-07-16 01:32:01.789574] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.938 [2024-07-16 01:32:01.789926] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.938 [2024-07-16 01:32:01.789945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.938 [2024-07-16 01:32:01.794175] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.938 [2024-07-16 01:32:01.794550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.938 [2024-07-16 01:32:01.794569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.938 [2024-07-16 01:32:01.798812] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.938 [2024-07-16 01:32:01.799177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.938 [2024-07-16 01:32:01.799196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.938 [2024-07-16 01:32:01.803438] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.938 [2024-07-16 01:32:01.803825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.938 [2024-07-16 01:32:01.803844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.938 [2024-07-16 01:32:01.808015] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.938 [2024-07-16 01:32:01.808378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.938 [2024-07-16 01:32:01.808396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.938 [2024-07-16 01:32:01.812576] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.938 [2024-07-16 01:32:01.812955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.938 [2024-07-16 01:32:01.812974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.938 [2024-07-16 01:32:01.817181] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.938 [2024-07-16 01:32:01.817555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.938 [2024-07-16 01:32:01.817573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.938 [2024-07-16 01:32:01.821770] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.938 
[2024-07-16 01:32:01.822129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.938 [2024-07-16 01:32:01.822147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.938 [2024-07-16 01:32:01.826446] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.938 [2024-07-16 01:32:01.826836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.938 [2024-07-16 01:32:01.826856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.938 [2024-07-16 01:32:01.831100] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.938 [2024-07-16 01:32:01.831460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.938 [2024-07-16 01:32:01.831479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.938 [2024-07-16 01:32:01.835631] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.938 [2024-07-16 01:32:01.836002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.938 [2024-07-16 01:32:01.836021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.938 [2024-07-16 01:32:01.840228] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.938 [2024-07-16 01:32:01.840600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.938 [2024-07-16 01:32:01.840619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.938 [2024-07-16 01:32:01.844780] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.938 [2024-07-16 01:32:01.845134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.938 [2024-07-16 01:32:01.845153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.938 [2024-07-16 01:32:01.849348] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.938 [2024-07-16 01:32:01.849696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.938 [2024-07-16 01:32:01.849716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.938 [2024-07-16 01:32:01.853899] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.938 [2024-07-16 01:32:01.854268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.938 [2024-07-16 01:32:01.854290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.938 [2024-07-16 01:32:01.858486] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.938 [2024-07-16 01:32:01.858853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.938 [2024-07-16 01:32:01.858873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.938 [2024-07-16 01:32:01.863062] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.938 [2024-07-16 01:32:01.863425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.938 [2024-07-16 01:32:01.863444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.938 [2024-07-16 01:32:01.868038] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.939 [2024-07-16 01:32:01.868427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.939 [2024-07-16 01:32:01.868446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.939 [2024-07-16 01:32:01.873366] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.939 [2024-07-16 01:32:01.873733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.939 [2024-07-16 01:32:01.873753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.939 [2024-07-16 01:32:01.878561] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.939 [2024-07-16 01:32:01.878930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.939 [2024-07-16 01:32:01.878949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.939 [2024-07-16 01:32:01.883671] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.939 [2024-07-16 01:32:01.884037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.939 [2024-07-16 01:32:01.884055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.939 [2024-07-16 01:32:01.889585] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.939 [2024-07-16 01:32:01.889964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.939 [2024-07-16 01:32:01.889983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.939 [2024-07-16 01:32:01.895366] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.939 [2024-07-16 01:32:01.895751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.939 [2024-07-16 01:32:01.895770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.939 [2024-07-16 01:32:01.901115] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.939 [2024-07-16 01:32:01.901508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.939 [2024-07-16 01:32:01.901526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.939 [2024-07-16 01:32:01.907110] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.939 [2024-07-16 01:32:01.907469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.939 [2024-07-16 01:32:01.907488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.939 [2024-07-16 01:32:01.912952] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.939 [2024-07-16 01:32:01.913324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.939 [2024-07-16 01:32:01.913348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.939 [2024-07-16 01:32:01.918829] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:35.939 [2024-07-16 01:32:01.919206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.939 [2024-07-16 01:32:01.919224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.198 [2024-07-16 01:32:01.924677] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.198 [2024-07-16 01:32:01.925039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.198 [2024-07-16 01:32:01.925062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
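
The repeating pattern above is one iteration of the digest-failure loop: the receiving side (tcp.c) finds that the CRC32C data digest (DDGST) trailing a data PDU does not match the payload it arrived with, and the WRITE carried by that PDU is then completed with a transport-level error instead of the corruption being silently accepted. As a rough sketch of what the receive-side check amounts to, in plain C — the function names and the flat-buffer assumption are illustrative only, not SPDK's actual code, which computes the digest incrementally across the PDU's iovecs:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* CRC32C (Castagnoli polynomial, reflected form 0x82F63B78): the
 * checksum NVMe/TCP uses for its header and data digests. Bitwise
 * form for clarity; production code uses table-driven or SSE4.2
 * variants. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int k = 0; k < 8; k++)
            crc = (crc & 1u) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Hypothetical receive-side check: recompute the digest over the
 * received payload and compare it with the DDGST that trailed it.
 * A mismatch is what tcp.c logs above as "Data digest error on
 * tqpair=...". */
static bool ddgst_ok(const uint8_t *payload, size_t len, uint32_t ddgst_recv)
{
    return crc32c(payload, len) == ddgst_recv;
}

Note that a failed check does not tear down the connection here: each error line is immediately followed by the command and completion prints for the same cid, i.e. the command is still completed back to the submitter, just with a transport error status.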
00:26:36.198 [2024-07-16 01:32:01.930736] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.198 [2024-07-16 01:32:01.931108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.198 [2024-07-16 01:32:01.931130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.198 [2024-07-16 01:32:01.936812] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.198 [2024-07-16 01:32:01.937172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.198 [2024-07-16 01:32:01.937193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.198 [2024-07-16 01:32:01.942639] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.198 [2024-07-16 01:32:01.943008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.198 [2024-07-16 01:32:01.943027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.198 [2024-07-16 01:32:01.948343] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.198 [2024-07-16 01:32:01.948716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.198 [2024-07-16 01:32:01.948736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.198 [2024-07-16 01:32:01.954403] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.198 [2024-07-16 01:32:01.954769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.198 [2024-07-16 01:32:01.954788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.198 [2024-07-16 01:32:01.960274] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.198 [2024-07-16 01:32:01.960651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.198 [2024-07-16 01:32:01.960671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.198 [2024-07-16 01:32:01.967467] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.198 [2024-07-16 01:32:01.967852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.198 [2024-07-16 01:32:01.967871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.198 [2024-07-16 01:32:01.975841] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.198 [2024-07-16 01:32:01.976226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.198 [2024-07-16 01:32:01.976245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.198 [2024-07-16 01:32:01.983854] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.198 [2024-07-16 01:32:01.984229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.198 [2024-07-16 01:32:01.984249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.198 [2024-07-16 01:32:01.992418] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.198 [2024-07-16 01:32:01.992799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.198 [2024-07-16 01:32:01.992819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.198 [2024-07-16 01:32:01.999872] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.198 [2024-07-16 01:32:02.000234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.198 [2024-07-16 01:32:02.000254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.198 [2024-07-16 01:32:02.007360] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.198 [2024-07-16 01:32:02.007731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.198 [2024-07-16 01:32:02.007750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.198 [2024-07-16 01:32:02.015026] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.198 [2024-07-16 01:32:02.015416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.198 [2024-07-16 01:32:02.015438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.198 [2024-07-16 01:32:02.022891] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.198 [2024-07-16 01:32:02.023286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.198 [2024-07-16 01:32:02.023306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.198 [2024-07-16 01:32:02.030527] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.198 [2024-07-16 01:32:02.030911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.198 [2024-07-16 01:32:02.030930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.198 [2024-07-16 01:32:02.037926] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.198 [2024-07-16 01:32:02.038318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.198 [2024-07-16 01:32:02.038343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.198 [2024-07-16 01:32:02.045213] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.198 [2024-07-16 01:32:02.045575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.198 [2024-07-16 01:32:02.045595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.198 [2024-07-16 01:32:02.051963] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.198 [2024-07-16 01:32:02.052351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.198 [2024-07-16 01:32:02.052371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.198 [2024-07-16 01:32:02.058837] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.198 [2024-07-16 01:32:02.059210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.198 [2024-07-16 01:32:02.059229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.198 [2024-07-16 01:32:02.066010] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.198 [2024-07-16 01:32:02.066418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.198 [2024-07-16 01:32:02.066437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.198 [2024-07-16 01:32:02.072910] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.198 [2024-07-16 01:32:02.073283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.198 [2024-07-16 01:32:02.073302] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.198 [2024-07-16 01:32:02.080315] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.198 [2024-07-16 01:32:02.080691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.198 [2024-07-16 01:32:02.080711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.198 [2024-07-16 01:32:02.087520] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.198 [2024-07-16 01:32:02.087895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.198 [2024-07-16 01:32:02.087915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.198 [2024-07-16 01:32:02.093844] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.198 [2024-07-16 01:32:02.093936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.198 [2024-07-16 01:32:02.093954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.198 [2024-07-16 01:32:02.099504] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.198 [2024-07-16 01:32:02.099870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.198 [2024-07-16 01:32:02.099889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.198 [2024-07-16 01:32:02.105842] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.198 [2024-07-16 01:32:02.106212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.198 [2024-07-16 01:32:02.106231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.198 [2024-07-16 01:32:02.111198] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.198 [2024-07-16 01:32:02.111572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.198 [2024-07-16 01:32:02.111591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.198 [2024-07-16 01:32:02.116434] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.198 [2024-07-16 01:32:02.116818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.198 
[2024-07-16 01:32:02.116837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.198 [2024-07-16 01:32:02.122301] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.198 [2024-07-16 01:32:02.122698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.198 [2024-07-16 01:32:02.122717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.198 [2024-07-16 01:32:02.128583] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.198 [2024-07-16 01:32:02.128643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.198 [2024-07-16 01:32:02.128663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.198 [2024-07-16 01:32:02.134586] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.198 [2024-07-16 01:32:02.134949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.198 [2024-07-16 01:32:02.134968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.198 [2024-07-16 01:32:02.140790] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.198 [2024-07-16 01:32:02.141170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.198 [2024-07-16 01:32:02.141190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.198 [2024-07-16 01:32:02.146608] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.198 [2024-07-16 01:32:02.146977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.198 [2024-07-16 01:32:02.146996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.199 [2024-07-16 01:32:02.152989] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.199 [2024-07-16 01:32:02.153373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.199 [2024-07-16 01:32:02.153393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.199 [2024-07-16 01:32:02.158735] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.199 [2024-07-16 01:32:02.159095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.199 [2024-07-16 01:32:02.159115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.199 [2024-07-16 01:32:02.164579] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.199 [2024-07-16 01:32:02.164937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.199 [2024-07-16 01:32:02.164956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.199 [2024-07-16 01:32:02.170808] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.199 [2024-07-16 01:32:02.171180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.199 [2024-07-16 01:32:02.171199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.199 [2024-07-16 01:32:02.176914] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.199 [2024-07-16 01:32:02.177280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.199 [2024-07-16 01:32:02.177299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.199 [2024-07-16 01:32:02.182776] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.199 [2024-07-16 01:32:02.183174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.199 [2024-07-16 01:32:02.183209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.456 [2024-07-16 01:32:02.188947] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.456 [2024-07-16 01:32:02.189358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.456 [2024-07-16 01:32:02.189381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.456 [2024-07-16 01:32:02.194815] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.456 [2024-07-16 01:32:02.195178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.456 [2024-07-16 01:32:02.195198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.456 [2024-07-16 01:32:02.200685] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.456 [2024-07-16 01:32:02.201037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.456 [2024-07-16 01:32:02.201057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.456 [2024-07-16 01:32:02.206850] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.456 [2024-07-16 01:32:02.207218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.456 [2024-07-16 01:32:02.207238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.456 [2024-07-16 01:32:02.212534] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.456 [2024-07-16 01:32:02.212905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.456 [2024-07-16 01:32:02.212924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.456 [2024-07-16 01:32:02.218713] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.456 [2024-07-16 01:32:02.219089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.456 [2024-07-16 01:32:02.219109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.456 [2024-07-16 01:32:02.225003] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.456 [2024-07-16 01:32:02.225371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.456 [2024-07-16 01:32:02.225390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.456 [2024-07-16 01:32:02.230887] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.456 [2024-07-16 01:32:02.231250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.456 [2024-07-16 01:32:02.231269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.456 [2024-07-16 01:32:02.236888] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.456 [2024-07-16 01:32:02.237260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.456 [2024-07-16 01:32:02.237280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.456 [2024-07-16 01:32:02.242621] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.456 [2024-07-16 01:32:02.242988] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.456 [2024-07-16 01:32:02.243008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.456 [2024-07-16 01:32:02.248030] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.456 [2024-07-16 01:32:02.248402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.456 [2024-07-16 01:32:02.248421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.456 [2024-07-16 01:32:02.255423] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.456 [2024-07-16 01:32:02.255794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.456 [2024-07-16 01:32:02.255813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.456 [2024-07-16 01:32:02.262479] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.456 [2024-07-16 01:32:02.262852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.456 [2024-07-16 01:32:02.262871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.456 [2024-07-16 01:32:02.268833] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.456 [2024-07-16 01:32:02.268912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.456 [2024-07-16 01:32:02.268930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.456 [2024-07-16 01:32:02.275847] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.456 [2024-07-16 01:32:02.276213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.456 [2024-07-16 01:32:02.276233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.456 [2024-07-16 01:32:02.282651] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.456 [2024-07-16 01:32:02.283014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.456 [2024-07-16 01:32:02.283033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.456 [2024-07-16 01:32:02.288741] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.456 
[2024-07-16 01:32:02.289133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.456 [2024-07-16 01:32:02.289156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.456 [2024-07-16 01:32:02.294887] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.456 [2024-07-16 01:32:02.295258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.456 [2024-07-16 01:32:02.295278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.456 [2024-07-16 01:32:02.301072] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.456 [2024-07-16 01:32:02.301472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.456 [2024-07-16 01:32:02.301491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.456 [2024-07-16 01:32:02.306448] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.456 [2024-07-16 01:32:02.306809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.456 [2024-07-16 01:32:02.306828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.456 [2024-07-16 01:32:02.311230] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.456 [2024-07-16 01:32:02.311604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.456 [2024-07-16 01:32:02.311623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.456 [2024-07-16 01:32:02.316032] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.456 [2024-07-16 01:32:02.316402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.456 [2024-07-16 01:32:02.316421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.456 [2024-07-16 01:32:02.320879] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.456 [2024-07-16 01:32:02.321240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.457 [2024-07-16 01:32:02.321257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.457 [2024-07-16 01:32:02.325549] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.457 [2024-07-16 01:32:02.325919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.457 [2024-07-16 01:32:02.325938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.457 [2024-07-16 01:32:02.330411] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.457 [2024-07-16 01:32:02.330788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.457 [2024-07-16 01:32:02.330806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.457 [2024-07-16 01:32:02.335207] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.457 [2024-07-16 01:32:02.335576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.457 [2024-07-16 01:32:02.335597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.457 [2024-07-16 01:32:02.339990] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.457 [2024-07-16 01:32:02.340357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.457 [2024-07-16 01:32:02.340377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.457 [2024-07-16 01:32:02.344676] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.457 [2024-07-16 01:32:02.345048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.457 [2024-07-16 01:32:02.345068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.457 [2024-07-16 01:32:02.349482] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.457 [2024-07-16 01:32:02.349879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.457 [2024-07-16 01:32:02.349898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.457 [2024-07-16 01:32:02.354395] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.457 [2024-07-16 01:32:02.354774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.457 [2024-07-16 01:32:02.354793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.457 [2024-07-16 01:32:02.359187] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.457 [2024-07-16 01:32:02.359559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.457 [2024-07-16 01:32:02.359577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.457 [2024-07-16 01:32:02.364583] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.457 [2024-07-16 01:32:02.364962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.457 [2024-07-16 01:32:02.364981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.457 [2024-07-16 01:32:02.369835] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.457 [2024-07-16 01:32:02.370207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.457 [2024-07-16 01:32:02.370225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.457 [2024-07-16 01:32:02.374808] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.457 [2024-07-16 01:32:02.375159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.457 [2024-07-16 01:32:02.375178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.457 [2024-07-16 01:32:02.379825] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.457 [2024-07-16 01:32:02.380166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.457 [2024-07-16 01:32:02.380185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.457 [2024-07-16 01:32:02.385279] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.457 [2024-07-16 01:32:02.385631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.457 [2024-07-16 01:32:02.385650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.457 [2024-07-16 01:32:02.390956] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.457 [2024-07-16 01:32:02.391307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.457 [2024-07-16 01:32:02.391326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
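
The completion prints all decode the same status word: "(00/22)" is the status code type / status code pair in hex, which SPDK names COMMAND TRANSIENT TRANSPORT ERROR, the sqhd value is the submission queue head the controller reports back (advancing as entries are consumed, hence the 0001/0021/0041/0061 cycle), and the trailing p/m/dnr bits show phase 0, no additional status, and do-not-retry clear. A small self-contained sketch of that decode from the 16-bit status word in completion dword 3 — the layout follows the NVMe base spec, but the struct and names are illustrative, not SPDK's:

#include <stdint.h>
#include <stdio.h>

/* Fields of the 16-bit status word in completion queue entry
 * dword 3 (bits 31:16), as printed in the completions above. */
struct cpl_status {
    unsigned p;   /* bit 0: phase tag */
    unsigned sc;  /* bits 8:1: status code        -> the "22" */
    unsigned sct; /* bits 11:9: status code type  -> the "00" */
    unsigned crd; /* bits 13:12: command retry delay (NVMe 1.4+) */
    unsigned m;   /* bit 14: more status via Get Log Page */
    unsigned dnr; /* bit 15: do not retry */
};

static struct cpl_status decode_status(uint16_t w)
{
    struct cpl_status s = {
        .p   = w & 1u,
        .sc  = (w >> 1) & 0xFFu,
        .sct = (w >> 9) & 0x7u,
        .crd = (w >> 12) & 0x3u,
        .m   = (w >> 14) & 1u,
        .dnr = (w >> 15) & 1u,
    };
    return s;
}

int main(void)
{
    uint16_t w = 0x22u << 1;  /* SCT 0x0 / SC 0x22, p:0 m:0 dnr:0, as logged */
    struct cpl_status s = decode_status(w);
    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", s.sct, s.sc, s.p, s.m, s.dnr);
    return 0;
}

dnr:0 combined with the TRANSIENT status code is the retryable case; a completion with dnr:1 would tell the host not to resubmit the command at all.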
00:26:36.457 [2024-07-16 01:32:02.396566] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.457 [2024-07-16 01:32:02.396914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.457 [2024-07-16 01:32:02.396932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.457 [2024-07-16 01:32:02.402526] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.457 [2024-07-16 01:32:02.402859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.457 [2024-07-16 01:32:02.402879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.457 [2024-07-16 01:32:02.408418] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.457 [2024-07-16 01:32:02.408846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.457 [2024-07-16 01:32:02.408865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.457 [2024-07-16 01:32:02.415972] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.457 [2024-07-16 01:32:02.416419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.457 [2024-07-16 01:32:02.416438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.457 [2024-07-16 01:32:02.422552] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.457 [2024-07-16 01:32:02.422953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.457 [2024-07-16 01:32:02.422972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.457 [2024-07-16 01:32:02.428450] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.457 [2024-07-16 01:32:02.428800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.457 [2024-07-16 01:32:02.428823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.457 [2024-07-16 01:32:02.434399] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:36.457 [2024-07-16 01:32:02.434761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.457 [2024-07-16 01:32:02.434781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:36.457 [2024-07-16 01:32:02.440560] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.457 [2024-07-16 01:32:02.440915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.457 [2024-07-16 01:32:02.440937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:36.716 [2024-07-16 01:32:02.446707] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.716 [2024-07-16 01:32:02.447087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.716 [2024-07-16 01:32:02.447110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:36.716 [2024-07-16 01:32:02.453227] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.716 [2024-07-16 01:32:02.453640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.716 [2024-07-16 01:32:02.453661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.716 [2024-07-16 01:32:02.458898] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.716 [2024-07-16 01:32:02.459242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.716 [2024-07-16 01:32:02.459262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:36.716 [2024-07-16 01:32:02.463619] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.716 [2024-07-16 01:32:02.463974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.716 [2024-07-16 01:32:02.463993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:36.716 [2024-07-16 01:32:02.468223] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.716 [2024-07-16 01:32:02.468578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.716 [2024-07-16 01:32:02.468597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:36.716 [2024-07-16 01:32:02.472860] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.716 [2024-07-16 01:32:02.473210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.716 [2024-07-16 01:32:02.473230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.716 [2024-07-16 01:32:02.477450] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.716 [2024-07-16 01:32:02.477800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.716 [2024-07-16 01:32:02.477819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:36.716 [2024-07-16 01:32:02.481935] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.716 [2024-07-16 01:32:02.482287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.716 [2024-07-16 01:32:02.482306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:36.716 [2024-07-16 01:32:02.486458] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.716 [2024-07-16 01:32:02.486812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.716 [2024-07-16 01:32:02.486832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:36.716 [2024-07-16 01:32:02.490929] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.716 [2024-07-16 01:32:02.491259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.716 [2024-07-16 01:32:02.491277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.716 [2024-07-16 01:32:02.495415] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.716 [2024-07-16 01:32:02.495780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.716 [2024-07-16 01:32:02.495800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:36.716 [2024-07-16 01:32:02.499941] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.716 [2024-07-16 01:32:02.500293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.716 [2024-07-16 01:32:02.500312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:36.716 [2024-07-16 01:32:02.504482] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.716 [2024-07-16 01:32:02.504837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.716 [2024-07-16 01:32:02.504856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:36.716 [2024-07-16 01:32:02.508999] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.716 [2024-07-16 01:32:02.509344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.716 [2024-07-16 01:32:02.509363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.716 [2024-07-16 01:32:02.513438] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.716 [2024-07-16 01:32:02.513780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.716 [2024-07-16 01:32:02.513798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:36.716 [2024-07-16 01:32:02.517989] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.716 [2024-07-16 01:32:02.518315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.716 [2024-07-16 01:32:02.518335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:36.716 [2024-07-16 01:32:02.522442] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.716 [2024-07-16 01:32:02.522789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.716 [2024-07-16 01:32:02.522807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:36.716 [2024-07-16 01:32:02.526949] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.716 [2024-07-16 01:32:02.527284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.716 [2024-07-16 01:32:02.527304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.716 [2024-07-16 01:32:02.531236] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.716 [2024-07-16 01:32:02.531592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.716 [2024-07-16 01:32:02.531611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:36.716 [2024-07-16 01:32:02.535830] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.716 [2024-07-16 01:32:02.536182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.716 [2024-07-16 01:32:02.536204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:36.716 [2024-07-16 01:32:02.540308] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.717 [2024-07-16 01:32:02.540662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.717 [2024-07-16 01:32:02.540681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:36.717 [2024-07-16 01:32:02.544824] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.717 [2024-07-16 01:32:02.545165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.717 [2024-07-16 01:32:02.545184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.717 [2024-07-16 01:32:02.549172] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.717 [2024-07-16 01:32:02.549529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.717 [2024-07-16 01:32:02.549547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:36.717 [2024-07-16 01:32:02.553551] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.717 [2024-07-16 01:32:02.553885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.717 [2024-07-16 01:32:02.553907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:36.717 [2024-07-16 01:32:02.557980] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.717 [2024-07-16 01:32:02.558328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.717 [2024-07-16 01:32:02.558352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:36.717 [2024-07-16 01:32:02.562289] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.717 [2024-07-16 01:32:02.562649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.717 [2024-07-16 01:32:02.562668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.717 [2024-07-16 01:32:02.566789] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.717 [2024-07-16 01:32:02.567126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.717 [2024-07-16 01:32:02.567145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:36.717 [2024-07-16 01:32:02.571297] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.717 [2024-07-16 01:32:02.571646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.717 [2024-07-16 01:32:02.571665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:36.717 [2024-07-16 01:32:02.575827] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.717 [2024-07-16 01:32:02.576170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.717 [2024-07-16 01:32:02.576188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:36.717 [2024-07-16 01:32:02.580417] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.717 [2024-07-16 01:32:02.580765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.717 [2024-07-16 01:32:02.580784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.717 [2024-07-16 01:32:02.584917] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.717 [2024-07-16 01:32:02.585268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.717 [2024-07-16 01:32:02.585287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:36.717 [2024-07-16 01:32:02.589388] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.717 [2024-07-16 01:32:02.589726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.717 [2024-07-16 01:32:02.589745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:36.717 [2024-07-16 01:32:02.594020] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.717 [2024-07-16 01:32:02.594373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.717 [2024-07-16 01:32:02.594392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:36.717 [2024-07-16 01:32:02.598885] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.717 [2024-07-16 01:32:02.599238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.717 [2024-07-16 01:32:02.599256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.717 [2024-07-16 01:32:02.603742] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.717 [2024-07-16 01:32:02.604090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.717 [2024-07-16 01:32:02.604109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:36.717 [2024-07-16 01:32:02.608746] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.717 [2024-07-16 01:32:02.609093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.717 [2024-07-16 01:32:02.609112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:36.717 [2024-07-16 01:32:02.613742] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.717 [2024-07-16 01:32:02.614081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.717 [2024-07-16 01:32:02.614099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:36.717 [2024-07-16 01:32:02.619014] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.717 [2024-07-16 01:32:02.619356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.717 [2024-07-16 01:32:02.619375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.717 [2024-07-16 01:32:02.624397] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.717 [2024-07-16 01:32:02.624744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.717 [2024-07-16 01:32:02.624763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:36.717 [2024-07-16 01:32:02.629295] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.717 [2024-07-16 01:32:02.629640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.717 [2024-07-16 01:32:02.629659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:36.717 [2024-07-16 01:32:02.634637] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.717 [2024-07-16 01:32:02.634978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.717 [2024-07-16 01:32:02.634999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:36.717 [2024-07-16 01:32:02.640851] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.717 [2024-07-16 01:32:02.641186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.717 [2024-07-16 01:32:02.641206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.717 [2024-07-16 01:32:02.648386] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.717 [2024-07-16 01:32:02.648803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.717 [2024-07-16 01:32:02.648823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:36.717 [2024-07-16 01:32:02.655827] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.717 [2024-07-16 01:32:02.656299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.717 [2024-07-16 01:32:02.656317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:36.717 [2024-07-16 01:32:02.663684] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.717 [2024-07-16 01:32:02.664110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.717 [2024-07-16 01:32:02.664129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:36.717 [2024-07-16 01:32:02.670013] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.717 [2024-07-16 01:32:02.670327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.717 [2024-07-16 01:32:02.670351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.717 [2024-07-16 01:32:02.676202] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.717 [2024-07-16 01:32:02.676515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.717 [2024-07-16 01:32:02.676534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:36.717 [2024-07-16 01:32:02.681313] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.717 [2024-07-16 01:32:02.681620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.717 [2024-07-16 01:32:02.681639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:36.717 [2024-07-16 01:32:02.686560] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.717 [2024-07-16 01:32:02.686871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.718 [2024-07-16 01:32:02.686890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:36.718 [2024-07-16 01:32:02.691759] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.718 [2024-07-16 01:32:02.692071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.718 [2024-07-16 01:32:02.692089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.718 [2024-07-16 01:32:02.696949] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.718 [2024-07-16 01:32:02.697250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.718 [2024-07-16 01:32:02.697269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:36.977 [2024-07-16 01:32:02.702546] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.977 [2024-07-16 01:32:02.702870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.977 [2024-07-16 01:32:02.702893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:36.977 [2024-07-16 01:32:02.707737] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.977 [2024-07-16 01:32:02.708028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.977 [2024-07-16 01:32:02.708050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:36.977 [2024-07-16 01:32:02.712511] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.977 [2024-07-16 01:32:02.712789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.977 [2024-07-16 01:32:02.712809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.977 [2024-07-16 01:32:02.717563] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.977 [2024-07-16 01:32:02.717853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.977 [2024-07-16 01:32:02.717873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:36.977 [2024-07-16 01:32:02.722530] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.977 [2024-07-16 01:32:02.722825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.977 [2024-07-16 01:32:02.722844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:36.977 [2024-07-16 01:32:02.727466] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.977 [2024-07-16 01:32:02.727761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.977 [2024-07-16 01:32:02.727780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:36.977 [2024-07-16 01:32:02.731842] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.977 [2024-07-16 01:32:02.732115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.977 [2024-07-16 01:32:02.732133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.977 [2024-07-16 01:32:02.736496] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.977 [2024-07-16 01:32:02.736762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.977 [2024-07-16 01:32:02.736781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:36.977 [2024-07-16 01:32:02.741362] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.977 [2024-07-16 01:32:02.741630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.977 [2024-07-16 01:32:02.741649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:36.977 [2024-07-16 01:32:02.746696] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.977 [2024-07-16 01:32:02.746957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.977 [2024-07-16 01:32:02.746976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:36.977 [2024-07-16 01:32:02.751567] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.977 [2024-07-16 01:32:02.751825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.977 [2024-07-16 01:32:02.751844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.977 [2024-07-16 01:32:02.756023] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.977 [2024-07-16 01:32:02.756290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.977 [2024-07-16 01:32:02.756308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:36.977 [2024-07-16 01:32:02.760125] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.977 [2024-07-16 01:32:02.760389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.977 [2024-07-16 01:32:02.760407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:36.977 [2024-07-16 01:32:02.763963] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.977 [2024-07-16 01:32:02.764225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.977 [2024-07-16 01:32:02.764244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:36.978 [2024-07-16 01:32:02.767795] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.978 [2024-07-16 01:32:02.768058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.978 [2024-07-16 01:32:02.768077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.978 [2024-07-16 01:32:02.771576] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.978 [2024-07-16 01:32:02.771845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.978 [2024-07-16 01:32:02.771867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:36.978 [2024-07-16 01:32:02.775451] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.978 [2024-07-16 01:32:02.775709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.978 [2024-07-16 01:32:02.775727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:36.978 [2024-07-16 01:32:02.779654] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.978 [2024-07-16 01:32:02.779916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.978 [2024-07-16 01:32:02.779934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:36.978 [2024-07-16 01:32:02.784375] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.978 [2024-07-16 01:32:02.784638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.978 [2024-07-16 01:32:02.784657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.978 [2024-07-16 01:32:02.789256] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.978 [2024-07-16 01:32:02.789517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.978 [2024-07-16 01:32:02.789536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:36.978 [2024-07-16 01:32:02.793421] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.978 [2024-07-16 01:32:02.793687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.978 [2024-07-16 01:32:02.793705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:36.978 [2024-07-16 01:32:02.798019] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.978 [2024-07-16 01:32:02.798289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.978 [2024-07-16 01:32:02.798307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:36.978 [2024-07-16 01:32:02.802210] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.978 [2024-07-16 01:32:02.802478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.978 [2024-07-16 01:32:02.802497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.978 [2024-07-16 01:32:02.806322] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.978 [2024-07-16 01:32:02.806591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.978 [2024-07-16 01:32:02.806610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:36.978 [2024-07-16 01:32:02.810501] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.978 [2024-07-16 01:32:02.810774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.978 [2024-07-16 01:32:02.810793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:36.978 [2024-07-16 01:32:02.814409] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.978 [2024-07-16 01:32:02.814676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.978 [2024-07-16 01:32:02.814694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:36.978 [2024-07-16 01:32:02.818440] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.978 [2024-07-16 01:32:02.818704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.978 [2024-07-16 01:32:02.818723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.978 [2024-07-16 01:32:02.822927] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.978 [2024-07-16 01:32:02.823196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.978 [2024-07-16 01:32:02.823214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:36.978 [2024-07-16 01:32:02.828185] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.978 [2024-07-16 01:32:02.828446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.978 [2024-07-16 01:32:02.828465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:36.978 [2024-07-16 01:32:02.832394] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.978 [2024-07-16 01:32:02.832660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.978 [2024-07-16 01:32:02.832679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:36.978 [2024-07-16 01:32:02.836853] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.978 [2024-07-16 01:32:02.837130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.978 [2024-07-16 01:32:02.837150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.978 [2024-07-16 01:32:02.841224] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.978 [2024-07-16 01:32:02.841489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.978 [2024-07-16 01:32:02.841507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:36.978 [2024-07-16 01:32:02.845434] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.978 [2024-07-16 01:32:02.845710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.978 [2024-07-16 01:32:02.845728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:36.978 [2024-07-16 01:32:02.849540] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.978 [2024-07-16 01:32:02.849809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.978 [2024-07-16 01:32:02.849828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:36.978 [2024-07-16 01:32:02.853693] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.978 [2024-07-16 01:32:02.853949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.978 [2024-07-16 01:32:02.853968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.978 [2024-07-16 01:32:02.857970] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.978 [2024-07-16 01:32:02.858227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.978 [2024-07-16 01:32:02.858245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:36.978 [2024-07-16 01:32:02.862058] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.978 [2024-07-16 01:32:02.862328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.978 [2024-07-16 01:32:02.862352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:36.978 [2024-07-16 01:32:02.866213] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.978 [2024-07-16 01:32:02.866484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.978 [2024-07-16 01:32:02.866503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:36.978 [2024-07-16 01:32:02.870221] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.978 [2024-07-16 01:32:02.870479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.978 [2024-07-16 01:32:02.870497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.978 [2024-07-16 01:32:02.874317] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.978 [2024-07-16 01:32:02.874593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.978 [2024-07-16 01:32:02.874612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:36.978 [2024-07-16 01:32:02.878570] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.978 [2024-07-16 01:32:02.878839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.978 [2024-07-16 01:32:02.878858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:36.978 [2024-07-16 01:32:02.882806] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.978 [2024-07-16 01:32:02.883067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.978 [2024-07-16 01:32:02.883090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:36.978 [2024-07-16 01:32:02.886956] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.978 [2024-07-16 01:32:02.887228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.979 [2024-07-16 01:32:02.887248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.979 [2024-07-16 01:32:02.891216] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.979 [2024-07-16 01:32:02.891489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.979 [2024-07-16 01:32:02.891508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:36.979 [2024-07-16 01:32:02.895319] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.979 [2024-07-16 01:32:02.895587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.979 [2024-07-16 01:32:02.895606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:36.979 [2024-07-16 01:32:02.899608] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.979 [2024-07-16 01:32:02.899941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.979 [2024-07-16 01:32:02.899960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:36.979 [2024-07-16 01:32:02.904090] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.979 [2024-07-16 01:32:02.904355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.979 [2024-07-16 01:32:02.904374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.979 [2024-07-16 01:32:02.908317] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.979 [2024-07-16 01:32:02.908590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.979 [2024-07-16 01:32:02.908609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:36.979 [2024-07-16 01:32:02.912633] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.979 [2024-07-16 01:32:02.912889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.979 [2024-07-16 01:32:02.912907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:36.979 [2024-07-16 01:32:02.916687] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.979 [2024-07-16 01:32:02.916957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.979 [2024-07-16 01:32:02.916976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:36.979 [2024-07-16 01:32:02.920770] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.979 [2024-07-16 01:32:02.921037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.979 [2024-07-16 01:32:02.921055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.979 [2024-07-16 01:32:02.924866] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.979 [2024-07-16 01:32:02.925121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.979 [2024-07-16 01:32:02.925140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:36.979 [2024-07-16 01:32:02.928908] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.979 [2024-07-16 01:32:02.929173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.979 [2024-07-16 01:32:02.929193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:36.979 [2024-07-16 01:32:02.933355] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.979 [2024-07-16 01:32:02.933616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.979 [2024-07-16 01:32:02.933634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:36.979 [2024-07-16 01:32:02.937523] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.979 [2024-07-16 01:32:02.937789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.979 [2024-07-16 01:32:02.937808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.979 [2024-07-16 01:32:02.941539] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.979 [2024-07-16 01:32:02.941803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.979 [2024-07-16 01:32:02.941822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:36.979 [2024-07-16 01:32:02.945625] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.979 [2024-07-16 01:32:02.945891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.979 [2024-07-16 01:32:02.945909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:36.979 [2024-07-16 01:32:02.950128] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.979 [2024-07-16 01:32:02.950551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.979 [2024-07-16 01:32:02.950570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:36.979 [2024-07-16 01:32:02.955218] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.979 [2024-07-16 01:32:02.955486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.979 [2024-07-16 01:32:02.955509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.979 [2024-07-16 01:32:02.959588] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:36.979 [2024-07-16 01:32:02.959852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.979 [2024-07-16 01:32:02.959874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:37.237 [2024-07-16 01:32:02.963997] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:37.237 [2024-07-16 01:32:02.964271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.237 [2024-07-16 01:32:02.964295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:37.237 [2024-07-16 01:32:02.968185] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:37.237 [2024-07-16 01:32:02.968456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.237 [2024-07-16 01:32:02.968476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:37.237 [2024-07-16 01:32:02.972415] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:37.237 [2024-07-16 01:32:02.972686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.237 [2024-07-16 01:32:02.972707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:37.237 [2024-07-16 01:32:02.976565] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:37.237 [2024-07-16 01:32:02.976838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.237 [2024-07-16 01:32:02.976858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:37.237 [2024-07-16 01:32:02.980845] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:37.237 [2024-07-16 01:32:02.981114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.237 [2024-07-16 01:32:02.981133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:37.237 [2024-07-16 01:32:02.984982] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:37.237 [2024-07-16 01:32:02.985239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.237 [2024-07-16 01:32:02.985258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:37.237 [2024-07-16 01:32:02.989071] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:37.237 [2024-07-16 01:32:02.989346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.237 [2024-07-16 01:32:02.989367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:37.237 [2024-07-16 01:32:02.994020] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:37.237 [2024-07-16 01:32:02.994285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.237 [2024-07-16 01:32:02.994304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:37.237 [2024-07-16 01:32:02.997825] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:37.237 [2024-07-16 01:32:02.998095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.237 [2024-07-16 01:32:02.998115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:37.237 [2024-07-16 01:32:03.001571] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:37.237 [2024-07-16 01:32:03.001842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.237 [2024-07-16 01:32:03.001860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:37.237 [2024-07-16 01:32:03.005283] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:37.237 [2024-07-16 01:32:03.005544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.237 [2024-07-16 01:32:03.005563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:37.237 [2024-07-16 01:32:03.009376] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:37.237 [2024-07-16 01:32:03.009660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.237 [2024-07-16 01:32:03.009679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:37.237 [2024-07-16 01:32:03.014553] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:37.237 [2024-07-16 01:32:03.014907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.237 [2024-07-16 01:32:03.014926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:37.237 [2024-07-16 01:32:03.019443] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:37.237 [2024-07-16 01:32:03.019719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.238 [2024-07-16 01:32:03.019738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:37.238 [2024-07-16 01:32:03.023809] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:37.238 [2024-07-16 01:32:03.024062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.238 [2024-07-16 01:32:03.024081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:37.238 [2024-07-16 01:32:03.027858] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:37.238 [2024-07-16 01:32:03.028126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.238 [2024-07-16 01:32:03.028144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:37.238 [2024-07-16 01:32:03.031543] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:37.238 [2024-07-16 01:32:03.031810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.238 [2024-07-16 01:32:03.031829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:37.238 [2024-07-16 01:32:03.035239] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:37.238 [2024-07-16 01:32:03.035493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.238 [2024-07-16 01:32:03.035512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:37.238 [2024-07-16 01:32:03.038939] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:37.238 [2024-07-16 01:32:03.039182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.238 [2024-07-16 01:32:03.039202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:37.238 [2024-07-16 01:32:03.042763] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:37.238 [2024-07-16 01:32:03.043035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.238 [2024-07-16 01:32:03.043054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:37.238 [2024-07-16 01:32:03.046545] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:37.238 [2024-07-16 01:32:03.046798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.238 [2024-07-16 01:32:03.046817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:37.238 [2024-07-16 01:32:03.050245] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:37.238 [2024-07-16 01:32:03.050506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.238 [2024-07-16 01:32:03.050524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:37.238 [2024-07-16 01:32:03.053959] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:37.238 [2024-07-16 01:32:03.054221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.238 [2024-07-16 01:32:03.054240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:37.238 [2024-07-16 01:32:03.057663] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:37.238 [2024-07-16 01:32:03.057913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.238 [2024-07-16 01:32:03.057932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:37.238 [2024-07-16 01:32:03.061653] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:37.238 [2024-07-16 01:32:03.061891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.238 [2024-07-16 01:32:03.061912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:37.238 [2024-07-16 01:32:03.066937] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:37.238 [2024-07-16 01:32:03.067177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.238 [2024-07-16 01:32:03.067196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:37.238 [2024-07-16 01:32:03.071668] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:37.238 [2024-07-16 01:32:03.071923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.238 [2024-07-16 01:32:03.071942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:37.238 [2024-07-16 01:32:03.076010] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:37.238 [2024-07-16 01:32:03.076247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.238 [2024-07-16 01:32:03.076266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:37.238 [2024-07-16 01:32:03.080182] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:37.238 [2024-07-16 01:32:03.080430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.238 [2024-07-16 01:32:03.080448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:37.238 [2024-07-16 01:32:03.084355] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:37.238 [2024-07-16 01:32:03.084600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.238 [2024-07-16 01:32:03.084619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:37.238 [2024-07-16 01:32:03.088605] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:37.238 [2024-07-16 01:32:03.088853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.238 [2024-07-16 01:32:03.088872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:37.238 [2024-07-16 01:32:03.092704] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:37.238 [2024-07-16 01:32:03.092936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.238 [2024-07-16 01:32:03.092954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:37.238 [2024-07-16 01:32:03.096861] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90
00:26:37.238 [2024-07-16 01:32:03.097127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.238 [2024-07-16 01:32:03.097146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:37.238 [2024-07-16 01:32:03.100972] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.238 [2024-07-16 01:32:03.101202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.238 [2024-07-16 01:32:03.101221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.238 [2024-07-16 01:32:03.105179] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.238 [2024-07-16 01:32:03.105452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.238 [2024-07-16 01:32:03.105470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.238 [2024-07-16 01:32:03.109324] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.238 [2024-07-16 01:32:03.109567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.238 [2024-07-16 01:32:03.109585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.238 [2024-07-16 01:32:03.113199] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.238 [2024-07-16 01:32:03.113466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.238 [2024-07-16 01:32:03.113485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.238 [2024-07-16 01:32:03.116996] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.238 [2024-07-16 01:32:03.117237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.238 [2024-07-16 01:32:03.117255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.238 [2024-07-16 01:32:03.120785] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.238 [2024-07-16 01:32:03.121028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.238 [2024-07-16 01:32:03.121047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.238 [2024-07-16 01:32:03.124587] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.238 [2024-07-16 01:32:03.124837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.238 [2024-07-16 01:32:03.124856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.238 [2024-07-16 01:32:03.128352] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.238 [2024-07-16 01:32:03.128600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.238 [2024-07-16 01:32:03.128619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.238 [2024-07-16 01:32:03.132072] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.238 [2024-07-16 01:32:03.132314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.238 [2024-07-16 01:32:03.132333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.238 [2024-07-16 01:32:03.136046] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.238 [2024-07-16 01:32:03.136300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.238 [2024-07-16 01:32:03.136319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.238 [2024-07-16 01:32:03.139905] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.238 [2024-07-16 01:32:03.140148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.238 [2024-07-16 01:32:03.140167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.238 [2024-07-16 01:32:03.144603] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.238 [2024-07-16 01:32:03.144843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.238 [2024-07-16 01:32:03.144862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.238 [2024-07-16 01:32:03.149794] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.238 [2024-07-16 01:32:03.150040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.238 [2024-07-16 01:32:03.150059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.238 [2024-07-16 01:32:03.154068] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.238 [2024-07-16 01:32:03.154301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.238 [2024-07-16 01:32:03.154320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.238 [2024-07-16 01:32:03.158190] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.238 [2024-07-16 01:32:03.158449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.238 [2024-07-16 01:32:03.158468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.238 [2024-07-16 01:32:03.162361] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.238 [2024-07-16 01:32:03.162600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.238 [2024-07-16 01:32:03.162618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.238 [2024-07-16 01:32:03.166601] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.238 [2024-07-16 01:32:03.166846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.238 [2024-07-16 01:32:03.166864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.239 [2024-07-16 01:32:03.170682] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.239 [2024-07-16 01:32:03.170930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.239 [2024-07-16 01:32:03.170952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.239 [2024-07-16 01:32:03.174762] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.239 [2024-07-16 01:32:03.174996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.239 [2024-07-16 01:32:03.175015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.239 [2024-07-16 01:32:03.179019] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.239 [2024-07-16 01:32:03.179267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.239 [2024-07-16 01:32:03.179286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.239 [2024-07-16 01:32:03.183234] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.239 [2024-07-16 01:32:03.183492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.239 [2024-07-16 01:32:03.183511] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.239 [2024-07-16 01:32:03.187433] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.239 [2024-07-16 01:32:03.187703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.239 [2024-07-16 01:32:03.187722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.239 [2024-07-16 01:32:03.191890] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.239 [2024-07-16 01:32:03.192148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.239 [2024-07-16 01:32:03.192166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.239 [2024-07-16 01:32:03.195887] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.239 [2024-07-16 01:32:03.196142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.239 [2024-07-16 01:32:03.196161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.239 [2024-07-16 01:32:03.199604] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.239 [2024-07-16 01:32:03.199865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.239 [2024-07-16 01:32:03.199883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.239 [2024-07-16 01:32:03.203346] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.239 [2024-07-16 01:32:03.203613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.239 [2024-07-16 01:32:03.203632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.239 [2024-07-16 01:32:03.207193] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.239 [2024-07-16 01:32:03.207458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.239 [2024-07-16 01:32:03.207477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.239 [2024-07-16 01:32:03.210925] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.239 [2024-07-16 01:32:03.211155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.239 
[2024-07-16 01:32:03.211173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.239 [2024-07-16 01:32:03.214621] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.239 [2024-07-16 01:32:03.214872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.239 [2024-07-16 01:32:03.214890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.239 [2024-07-16 01:32:03.218622] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.239 [2024-07-16 01:32:03.218860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.239 [2024-07-16 01:32:03.218879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.239 [2024-07-16 01:32:03.223069] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.239 [2024-07-16 01:32:03.223308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.239 [2024-07-16 01:32:03.223331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.496 [2024-07-16 01:32:03.228082] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.496 [2024-07-16 01:32:03.228333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.496 [2024-07-16 01:32:03.228362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.496 [2024-07-16 01:32:03.232279] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.496 [2024-07-16 01:32:03.232541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.496 [2024-07-16 01:32:03.232562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.496 [2024-07-16 01:32:03.236440] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.496 [2024-07-16 01:32:03.236672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.496 [2024-07-16 01:32:03.236691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.496 [2024-07-16 01:32:03.240542] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.496 [2024-07-16 01:32:03.240799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:37.496 [2024-07-16 01:32:03.240819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.496 [2024-07-16 01:32:03.244549] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.496 [2024-07-16 01:32:03.244818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.496 [2024-07-16 01:32:03.244838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.496 [2024-07-16 01:32:03.248306] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.496 [2024-07-16 01:32:03.248562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.496 [2024-07-16 01:32:03.248582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.496 [2024-07-16 01:32:03.252114] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.496 [2024-07-16 01:32:03.252376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.496 [2024-07-16 01:32:03.252413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.496 [2024-07-16 01:32:03.255960] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.496 [2024-07-16 01:32:03.256189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.496 [2024-07-16 01:32:03.256208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.496 [2024-07-16 01:32:03.259704] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.496 [2024-07-16 01:32:03.259965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.496 [2024-07-16 01:32:03.259984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.496 [2024-07-16 01:32:03.263432] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.496 [2024-07-16 01:32:03.263701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.496 [2024-07-16 01:32:03.263720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.496 [2024-07-16 01:32:03.267127] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.497 [2024-07-16 01:32:03.267368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.497 [2024-07-16 01:32:03.267386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.497 [2024-07-16 01:32:03.270854] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.497 [2024-07-16 01:32:03.271114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.497 [2024-07-16 01:32:03.271132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.497 [2024-07-16 01:32:03.274694] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.497 [2024-07-16 01:32:03.274943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.497 [2024-07-16 01:32:03.274964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.497 [2024-07-16 01:32:03.278405] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.497 [2024-07-16 01:32:03.278663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.497 [2024-07-16 01:32:03.278681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.497 [2024-07-16 01:32:03.282066] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.497 [2024-07-16 01:32:03.282309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.497 [2024-07-16 01:32:03.282328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.497 [2024-07-16 01:32:03.285746] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.497 [2024-07-16 01:32:03.285979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.497 [2024-07-16 01:32:03.285998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.497 [2024-07-16 01:32:03.289427] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.497 [2024-07-16 01:32:03.289683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.497 [2024-07-16 01:32:03.289704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.497 [2024-07-16 01:32:03.293127] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.497 [2024-07-16 01:32:03.293374] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.497 [2024-07-16 01:32:03.293393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.497 [2024-07-16 01:32:03.296816] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.497 [2024-07-16 01:32:03.297045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.497 [2024-07-16 01:32:03.297063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.497 [2024-07-16 01:32:03.300511] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.497 [2024-07-16 01:32:03.300761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.497 [2024-07-16 01:32:03.300780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.497 [2024-07-16 01:32:03.304204] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.497 [2024-07-16 01:32:03.304468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.497 [2024-07-16 01:32:03.304486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.497 [2024-07-16 01:32:03.307859] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a4800) with pdu=0x2000190fef90 00:26:37.497 [2024-07-16 01:32:03.308106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.497 [2024-07-16 01:32:03.308124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.497 00:26:37.497 Latency(us) 00:26:37.497 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:37.497 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:37.497 nvme0n1 : 2.00 6071.34 758.92 0.00 0.00 2631.68 1654.00 10423.34 00:26:37.497 =================================================================================================================== 00:26:37.497 Total : 6071.34 758.92 0.00 0.00 2631.68 1654.00 10423.34 00:26:37.497 0 00:26:37.497 01:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:37.497 01:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:37.497 | .driver_specific 00:26:37.497 | .nvme_error 00:26:37.497 | .status_code 00:26:37.497 | .command_transient_transport_error' 00:26:37.497 01:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:37.497 01:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 
00:26:37.754 01:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 391 > 0 ))
00:26:37.754 01:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3533339
00:26:37.754 01:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3533339 ']'
00:26:37.754 01:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3533339
00:26:37.754 01:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:26:37.754 01:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:37.754 01:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3533339
00:26:37.754 01:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:26:37.754 01:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:26:37.754 01:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3533339'
00:26:37.754 killing process with pid 3533339
00:26:37.754 01:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3533339
00:26:37.754 Received shutdown signal, test time was about 2.000000 seconds
00:26:37.754
00:26:37.754 Latency(us)
00:26:37.754 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:37.754 ===================================================================================================================
00:26:37.754 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:37.754 01:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3533339
00:26:37.754 01:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3531220
00:26:37.754 01:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3531220 ']'
00:26:37.754 01:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3531220
00:26:37.754 01:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:26:37.754 01:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:37.754 01:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3531220
00:26:38.011 01:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:26:38.011 01:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:26:38.011 01:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3531220'
00:26:38.011 killing process with pid 3531220
00:26:38.011 01:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3531220
00:26:38.011 01:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3531220
00:26:38.011
00:26:38.011 real 0m16.851s
00:26:38.011 user 0m32.092s
00:26:38.011 sys 0m4.599s
00:26:38.011 01:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable
00:26:38.011 01:32:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:38.011 ************************************
00:26:38.011 END TEST nvmf_digest_error
************************************
00:26:38.011 01:32:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0
00:26:38.011 01:32:03 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:26:38.011 01:32:03 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:26:38.011 01:32:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
00:26:38.011 01:32:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync
00:26:38.011 01:32:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:26:38.011 01:32:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e
00:26:38.011 01:32:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
00:26:38.011 01:32:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:26:38.269 rmmod nvme_tcp
00:26:38.269 rmmod nvme_fabrics
00:26:38.269 rmmod nvme_keyring
00:26:38.269 01:32:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:26:38.269 01:32:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e
00:26:38.269 01:32:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0
00:26:38.269 01:32:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 3531220 ']'
00:26:38.269 01:32:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 3531220
00:26:38.269 01:32:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 3531220 ']'
00:26:38.269 01:32:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 3531220
00:26:38.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3531220) - No such process
00:26:38.269 01:32:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 3531220 is not found'
00:26:38.269 Process with pid 3531220 is not found
00:26:38.269 01:32:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:26:38.269 01:32:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:26:38.269 01:32:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:26:38.269 01:32:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:26:38.269 01:32:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns
00:26:38.269 01:32:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:38.269 01:32:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:26:38.269 01:32:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:40.168 01:32:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:26:40.168
00:26:40.168 real 0m41.358s
00:26:40.168 user 1m5.713s
00:26:40.168 sys 0m13.234s
00:26:40.168 01:32:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable
00:26:40.168 01:32:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:26:40.168 ************************************
00:26:40.168 END TEST nvmf_digest
00:26:40.168 ************************************
00:26:40.168 01:32:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:26:40.168 01:32:06 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]]
00:26:40.168 01:32:06 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]]
00:26:40.168 01:32:06 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]]
00:26:40.168 01:32:06
nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:40.168 01:32:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:40.168 01:32:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:40.168 01:32:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:40.426 ************************************ 00:26:40.426 START TEST nvmf_bdevperf 00:26:40.426 ************************************ 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:40.426 * Looking for test storage... 00:26:40.426 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:26:40.426 01:32:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:45.683 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:45.683 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:45.683 Found net devices under 0000:86:00.0: cvl_0_0 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
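
The discovery loop above maps each supported PCI function to its kernel net device through sysfs (its second pass, over 0000:86:00.1, continues just below). A standalone sketch of that lookup, with this run's bus addresses assumed:

  # Resolve each NIC's PCI address to its bound net interface(s) (sketch).
  for pci in 0000:86:00.0 0000:86:00.1; do
      for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
          # strip the sysfs path, mirroring ${pci_net_devs[@]##*/} in the trace
          [ -e "$netdir" ] && echo "Found net devices under $pci: ${netdir##*/}"
      done
  done
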
00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:45.683 Found net devices under 0000:86:00.1: cvl_0_1 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:45.683 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:45.940 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:45.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:45.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:26:45.940 00:26:45.940 --- 10.0.0.2 ping statistics --- 00:26:45.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:45.940 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:26:45.940 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:45.940 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:45.940 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:26:45.940 00:26:45.940 --- 10.0.0.1 ping statistics --- 00:26:45.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:45.940 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:26:45.940 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:45.940 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:26:45.940 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:45.940 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:45.941 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:45.941 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:45.941 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:45.941 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:45.941 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:45.941 01:32:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:45.941 01:32:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:45.941 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:45.941 01:32:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:45.941 01:32:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:45.941 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3537342 00:26:45.941 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3537342 00:26:45.941 01:32:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:45.941 01:32:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 3537342 ']' 00:26:45.941 01:32:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:45.941 01:32:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:45.941 01:32:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:45.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:45.941 01:32:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:45.941 01:32:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:45.941 [2024-07-16 01:32:11.759426] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
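Condensed, the nvmf_tcp_init sequence traced above amounts to the following sketch; cvl_0_0 and cvl_0_1 are the two ice ports discovered earlier under 0000:86:00.0/0000:86:00.1, and every command is lifted directly from the trace (the target-side port is moved into a network namespace so one host can play both initiator and target over real NICs):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                        # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator
modprobe nvme-tcp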
00:26:45.941 [2024-07-16 01:32:11.759470] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:45.941 EAL: No free 2048 kB hugepages reported on node 1 00:26:45.941 [2024-07-16 01:32:11.817786] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:45.941 [2024-07-16 01:32:11.894718] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:45.941 [2024-07-16 01:32:11.894760] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:45.941 [2024-07-16 01:32:11.894767] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:45.941 [2024-07-16 01:32:11.894773] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:45.941 [2024-07-16 01:32:11.894778] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:45.941 [2024-07-16 01:32:11.894881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:45.941 [2024-07-16 01:32:11.898352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:45.941 [2024-07-16 01:32:11.898356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:46.873 01:32:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:46.873 01:32:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:26:46.873 01:32:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:46.873 01:32:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:46.873 01:32:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:46.873 01:32:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:46.873 01:32:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:46.873 01:32:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.873 01:32:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:46.873 [2024-07-16 01:32:12.598343] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:46.873 01:32:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.873 01:32:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:46.873 01:32:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.873 01:32:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:46.873 Malloc0 00:26:46.873 01:32:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.873 01:32:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:46.873 01:32:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.873 01:32:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:46.873 01:32:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.873 01:32:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:46.873 01:32:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.873 01:32:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:46.873 01:32:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.873 01:32:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:46.873 01:32:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.873 01:32:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:46.873 [2024-07-16 01:32:12.654905] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:46.873 01:32:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.873 01:32:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:46.873 01:32:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:46.873 01:32:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:26:46.873 01:32:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:26:46.873 01:32:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:46.873 01:32:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:46.873 { 00:26:46.873 "params": { 00:26:46.873 "name": "Nvme$subsystem", 00:26:46.873 "trtype": "$TEST_TRANSPORT", 00:26:46.873 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:46.873 "adrfam": "ipv4", 00:26:46.873 "trsvcid": "$NVMF_PORT", 00:26:46.873 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:46.873 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:46.873 "hdgst": ${hdgst:-false}, 00:26:46.873 "ddgst": ${ddgst:-false} 00:26:46.873 }, 00:26:46.873 "method": "bdev_nvme_attach_controller" 00:26:46.873 } 00:26:46.873 EOF 00:26:46.873 )") 00:26:46.873 01:32:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:26:46.873 01:32:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:26:46.873 01:32:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:26:46.873 01:32:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:46.873 "params": { 00:26:46.873 "name": "Nvme1", 00:26:46.873 "trtype": "tcp", 00:26:46.873 "traddr": "10.0.0.2", 00:26:46.873 "adrfam": "ipv4", 00:26:46.873 "trsvcid": "4420", 00:26:46.873 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:46.873 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:46.873 "hdgst": false, 00:26:46.873 "ddgst": false 00:26:46.873 }, 00:26:46.873 "method": "bdev_nvme_attach_controller" 00:26:46.873 }' 00:26:46.873 [2024-07-16 01:32:12.706877] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
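The target configuration above is driven entirely through rpc_cmd. Assuming the usual mapping of the harness's rpc_cmd onto SPDK's scripts/rpc.py (an assumption; check autotest_common.sh in your tree), a standalone reproduction of the same steps, with all flags taken verbatim from the trace, would look like:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
# wait for the app to listen on /var/tmp/spdk.sock, then configure it:
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420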
00:26:46.873 [2024-07-16 01:32:12.706920] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3537588 ] 00:26:46.873 EAL: No free 2048 kB hugepages reported on node 1 00:26:46.873 [2024-07-16 01:32:12.763258] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:46.873 [2024-07-16 01:32:12.836029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:47.132 Running I/O for 1 seconds... 00:26:48.065 00:26:48.065 Latency(us) 00:26:48.065 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:48.065 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:48.065 Verification LBA range: start 0x0 length 0x4000 00:26:48.065 Nvme1n1 : 1.01 11068.48 43.24 0.00 0.00 11521.80 2106.51 15229.32 00:26:48.065 =================================================================================================================== 00:26:48.065 Total : 11068.48 43.24 0.00 0.00 11521.80 2106.51 15229.32 00:26:48.323 01:32:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3537822 00:26:48.323 01:32:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:26:48.323 01:32:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:48.323 01:32:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:48.323 01:32:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:26:48.323 01:32:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:26:48.323 01:32:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:48.323 01:32:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:48.323 { 00:26:48.323 "params": { 00:26:48.323 "name": "Nvme$subsystem", 00:26:48.323 "trtype": "$TEST_TRANSPORT", 00:26:48.323 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:48.323 "adrfam": "ipv4", 00:26:48.323 "trsvcid": "$NVMF_PORT", 00:26:48.323 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:48.323 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:48.323 "hdgst": ${hdgst:-false}, 00:26:48.323 "ddgst": ${ddgst:-false} 00:26:48.323 }, 00:26:48.323 "method": "bdev_nvme_attach_controller" 00:26:48.323 } 00:26:48.323 EOF 00:26:48.323 )") 00:26:48.323 01:32:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:26:48.323 01:32:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:26:48.323 01:32:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:26:48.323 01:32:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:48.323 "params": { 00:26:48.323 "name": "Nvme1", 00:26:48.323 "trtype": "tcp", 00:26:48.323 "traddr": "10.0.0.2", 00:26:48.323 "adrfam": "ipv4", 00:26:48.323 "trsvcid": "4420", 00:26:48.323 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:48.323 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:48.323 "hdgst": false, 00:26:48.323 "ddgst": false 00:26:48.323 }, 00:26:48.323 "method": "bdev_nvme_attach_controller" 00:26:48.323 }' 00:26:48.323 [2024-07-16 01:32:14.236299] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
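Both bdevperf invocations receive their controller-attach configuration over an anonymous fd (/dev/fd/62, /dev/fd/63) rather than a file on disk. A minimal sketch of that pattern follows; note the trace only prints the inner bdev_nvme_attach_controller entry, so the enclosing subsystems/bdev envelope below is assumed from SPDK's standard JSON-config shape, and gen_attach_json is a hypothetical stand-in for the gen_nvmf_target_json helper used above:

gen_attach_json() {
cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}
# process substitution hands the config to bdevperf as /dev/fd/63
./build/examples/bdevperf --json <(gen_attach_json) -q 128 -o 4096 -w verify -t 15 -f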
00:26:48.323 [2024-07-16 01:32:14.236350] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3537822 ] 00:26:48.323 EAL: No free 2048 kB hugepages reported on node 1 00:26:48.323 [2024-07-16 01:32:14.292518] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.583 [2024-07-16 01:32:14.361758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.852 Running I/O for 15 seconds... 00:26:51.395 01:32:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3537342 00:26:51.395 01:32:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:26:51.395 [2024-07-16 01:32:17.212429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:99472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.395 [2024-07-16 01:32:17.212468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.395 [2024-07-16 01:32:17.212485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.395 [2024-07-16 01:32:17.212495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.395 [2024-07-16 01:32:17.212505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:99488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.395 [2024-07-16 01:32:17.212514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.395 [2024-07-16 01:32:17.212528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.395 [2024-07-16 01:32:17.212536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.395 [2024-07-16 01:32:17.212545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:99504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.395 [2024-07-16 01:32:17.212552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.395 [2024-07-16 01:32:17.212560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:99512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.395 [2024-07-16 01:32:17.212567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.395 [2024-07-16 01:32:17.212577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.395 [2024-07-16 01:32:17.212585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.395 [2024-07-16 01:32:17.212593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:99528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.395 [2024-07-16 01:32:17.212599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.395 [... the same nvme_io_qpair_print_command / spdk_nvme_print_completion pair repeats here for every remaining outstanding I/O on qid:1 (WRITE lba:99536 through lba:100304 and READ lba:99288 through lba:99456), each reported as ABORTED - SQ DELETION (00/08) once the target is killed ...] 00:26:51.398 [2024-07-16 01:32:17.214500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc90200 is same with the state(5) to be set 00:26:51.398 [2024-07-16 01:32:17.214508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.398 [2024-07-16 01:32:17.214513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.398 [2024-07-16 01:32:17.214519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99464 len:8 PRP1 0x0 PRP2 0x0 00:26:51.398 [2024-07-16 01:32:17.214526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.398 [2024-07-16 01:32:17.214568]
bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc90200 was disconnected and freed. reset controller. 00:26:51.398 [2024-07-16 01:32:17.217316] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.398 [2024-07-16 01:32:17.217374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.398 [2024-07-16 01:32:17.218133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.398 [2024-07-16 01:32:17.218152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.398 [2024-07-16 01:32:17.218163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.398 [2024-07-16 01:32:17.218342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.398 [2024-07-16 01:32:17.218516] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.398 [2024-07-16 01:32:17.218525] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.398 [2024-07-16 01:32:17.218533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.398 [2024-07-16 01:32:17.221264] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.398 [2024-07-16 01:32:17.230468] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.398 [2024-07-16 01:32:17.230904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.398 [2024-07-16 01:32:17.230961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.398 [2024-07-16 01:32:17.230983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.398 [2024-07-16 01:32:17.231544] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.398 [2024-07-16 01:32:17.231713] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.398 [2024-07-16 01:32:17.231722] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.398 [2024-07-16 01:32:17.231729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.398 [2024-07-16 01:32:17.234364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
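Every queued WRITE/READ above is completed with status (00/08), which SPDK prints as (SCT/SC): Status Code Type 0x0 (generic command status), Status Code 0x08, "Command Aborted due to SQ Deletion" — once the TCP qpair is torn down its submission queue is gone, so every command still outstanding on it is manually completed with that status. A minimal standalone sketch (my own illustration, not SPDK source) of decoding those fields from CQE Dword 3:

/* Standalone sketch (not SPDK code): decode the NVMe completion status
 * behind "ABORTED - SQ DELETION (00/08)". Per the NVMe base spec, CQE
 * Dword 3 carries the Command Identifier in bits 15:0, the Phase Tag in
 * bit 16, and the Status Field in bits 31:17. */
#include <stdint.h>
#include <stdio.h>

struct nvme_status {
	uint8_t sc;   /* Status Code         (status bits 7:0)   */
	uint8_t sct;  /* Status Code Type    (status bits 10:8)  */
	uint8_t crd;  /* Command Retry Delay (status bits 12:11) */
	uint8_t m;    /* More                (status bit 13)     */
	uint8_t dnr;  /* Do Not Retry        (status bit 14)     */
};

static struct nvme_status decode_cqe_dw3(uint32_t dw3)
{
	uint16_t status = (uint16_t)(dw3 >> 17); /* strip CID and phase tag */
	struct nvme_status s = {
		.sc  = status & 0xff,
		.sct = (status >> 8) & 0x7,
		.crd = (status >> 11) & 0x3,
		.m   = (status >> 13) & 0x1,
		.dnr = (status >> 14) & 0x1,
	};
	return s;
}

int main(void)
{
	/* SCT 0x0 / SC 0x08 = Command Aborted due to SQ Deletion */
	uint32_t dw3 = (uint32_t)0x0008 << 17;
	struct nvme_status s = decode_cqe_dw3(dw3);
	printf("(%02x/%02x) m:%u dnr:%u\n", s.sct, s.sc, s.m, s.dnr);
	return 0;
}

This prints "(00/08) m:0 dnr:0", matching the log's format. With DNR clear the status permits a retry, which is why bdev_nvme immediately frees the qpair and resets the controller rather than failing the I/O permanently.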
00:26:51.398 [2024-07-16 01:32:17.243243] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.398 [2024-07-16 01:32:17.243594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.398 [2024-07-16 01:32:17.243610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.398 [2024-07-16 01:32:17.243617] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.398 [2024-07-16 01:32:17.243775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.398 [2024-07-16 01:32:17.243934] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.398 [2024-07-16 01:32:17.243943] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.398 [2024-07-16 01:32:17.243949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.398 [2024-07-16 01:32:17.246564] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.398 [2024-07-16 01:32:17.256022] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.398 [2024-07-16 01:32:17.256461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.398 [2024-07-16 01:32:17.256511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.398 [2024-07-16 01:32:17.256533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.398 [2024-07-16 01:32:17.257110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.398 [2024-07-16 01:32:17.257708] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.398 [2024-07-16 01:32:17.257723] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.398 [2024-07-16 01:32:17.257734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.398 [2024-07-16 01:32:17.262181] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:51.398 [2024-07-16 01:32:17.269763] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.398 [2024-07-16 01:32:17.270228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.398 [2024-07-16 01:32:17.270246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.398 [2024-07-16 01:32:17.270254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.398 [2024-07-16 01:32:17.270447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.398 [2024-07-16 01:32:17.270632] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.398 [2024-07-16 01:32:17.270641] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.398 [2024-07-16 01:32:17.270649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.398 [2024-07-16 01:32:17.273561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.398 [2024-07-16 01:32:17.282600] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.398 [2024-07-16 01:32:17.283016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.398 [2024-07-16 01:32:17.283032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.399 [2024-07-16 01:32:17.283040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.399 [2024-07-16 01:32:17.283198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.399 [2024-07-16 01:32:17.283364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.399 [2024-07-16 01:32:17.283373] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.399 [2024-07-16 01:32:17.283380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.399 [2024-07-16 01:32:17.285994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:51.399 [2024-07-16 01:32:17.295310] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.399 [2024-07-16 01:32:17.295710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.399 [2024-07-16 01:32:17.295726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.399 [2024-07-16 01:32:17.295733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.399 [2024-07-16 01:32:17.295891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.399 [2024-07-16 01:32:17.296052] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.399 [2024-07-16 01:32:17.296061] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.399 [2024-07-16 01:32:17.296067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.399 [2024-07-16 01:32:17.298678] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.399 [2024-07-16 01:32:17.308143] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.399 [2024-07-16 01:32:17.308562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.399 [2024-07-16 01:32:17.308578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.399 [2024-07-16 01:32:17.308584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.399 [2024-07-16 01:32:17.308743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.399 [2024-07-16 01:32:17.308901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.399 [2024-07-16 01:32:17.308909] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.399 [2024-07-16 01:32:17.308915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.399 [2024-07-16 01:32:17.311530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
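Each of these reset cycles dies at the same point: posix_sock_create's connect() returns errno 111, which on Linux is ECONNREFUSED — nothing is listening on 10.0.0.2:4420 while the target side is down, so a reachable peer answers the SYN with a RST. An illustrative snippet (plain sockets, not SPDK's posix.c; it targets 127.0.0.1 so the refusal is reproducible locally, but a refused 10.0.0.2:4420 behaves identically):

/* Illustrative only: shows how connect(2) to a TCP port with no
 * listener yields errno 111 (ECONNREFUSED) on Linux, the same errno
 * posix_sock_create logs above for 10.0.0.2:4420. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0) { perror("socket"); return 1; }

	struct sockaddr_in addr = { .sin_family = AF_INET,
	                            .sin_port = htons(4420) };
	inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		/* With no listener bound to the port, the peer's RST makes
		 * connect() fail immediately with ECONNREFUSED (111). */
		printf("connect() failed, errno = %d (%s)\n",
		       errno, strerror(errno));
	}
	close(fd);
	return 0;
}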
00:26:51.399 [2024-07-16 01:32:17.320999] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.399 [2024-07-16 01:32:17.321432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.399 [2024-07-16 01:32:17.321476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.399 [2024-07-16 01:32:17.321499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.399 [2024-07-16 01:32:17.321915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.399 [2024-07-16 01:32:17.322075] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.399 [2024-07-16 01:32:17.322084] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.399 [2024-07-16 01:32:17.322089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.399 [2024-07-16 01:32:17.324700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.399 [2024-07-16 01:32:17.333717] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.399 [2024-07-16 01:32:17.334131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.399 [2024-07-16 01:32:17.334147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.399 [2024-07-16 01:32:17.334154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.399 [2024-07-16 01:32:17.334314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.399 [2024-07-16 01:32:17.334501] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.399 [2024-07-16 01:32:17.334511] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.399 [2024-07-16 01:32:17.334517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.399 [2024-07-16 01:32:17.337109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:51.399 [2024-07-16 01:32:17.346429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.399 [2024-07-16 01:32:17.346817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.399 [2024-07-16 01:32:17.346833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.399 [2024-07-16 01:32:17.346840] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.399 [2024-07-16 01:32:17.346998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.399 [2024-07-16 01:32:17.347158] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.399 [2024-07-16 01:32:17.347166] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.399 [2024-07-16 01:32:17.347173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.399 [2024-07-16 01:32:17.349787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.399 [2024-07-16 01:32:17.359254] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.399 [2024-07-16 01:32:17.359704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.399 [2024-07-16 01:32:17.359721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.399 [2024-07-16 01:32:17.359728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.399 [2024-07-16 01:32:17.359887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.399 [2024-07-16 01:32:17.360045] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.399 [2024-07-16 01:32:17.360054] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.399 [2024-07-16 01:32:17.360060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.399 [2024-07-16 01:32:17.362679] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:51.399 [2024-07-16 01:32:17.372100] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.399 [2024-07-16 01:32:17.372526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.399 [2024-07-16 01:32:17.372542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.399 [2024-07-16 01:32:17.372549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.399 [2024-07-16 01:32:17.372707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.399 [2024-07-16 01:32:17.372867] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.399 [2024-07-16 01:32:17.372875] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.399 [2024-07-16 01:32:17.372881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.399 [2024-07-16 01:32:17.375499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.658 [2024-07-16 01:32:17.385057] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.658 [2024-07-16 01:32:17.385476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.658 [2024-07-16 01:32:17.385492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.658 [2024-07-16 01:32:17.385502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.658 [2024-07-16 01:32:17.385662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.658 [2024-07-16 01:32:17.385820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.658 [2024-07-16 01:32:17.385829] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.658 [2024-07-16 01:32:17.385835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.658 [2024-07-16 01:32:17.388511] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:51.658 [2024-07-16 01:32:17.397931] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.658 [2024-07-16 01:32:17.398345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.658 [2024-07-16 01:32:17.398391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.658 [2024-07-16 01:32:17.398413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.658 [2024-07-16 01:32:17.398926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.658 [2024-07-16 01:32:17.399128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.658 [2024-07-16 01:32:17.399141] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.658 [2024-07-16 01:32:17.399151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.658 [2024-07-16 01:32:17.403594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.658 [2024-07-16 01:32:17.411665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.658 [2024-07-16 01:32:17.412110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.658 [2024-07-16 01:32:17.412163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.658 [2024-07-16 01:32:17.412184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.658 [2024-07-16 01:32:17.412779] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.658 [2024-07-16 01:32:17.413287] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.658 [2024-07-16 01:32:17.413297] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.658 [2024-07-16 01:32:17.413304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.658 [2024-07-16 01:32:17.416222] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:51.658 [2024-07-16 01:32:17.424506] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.658 [2024-07-16 01:32:17.424908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.658 [2024-07-16 01:32:17.424924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.659 [2024-07-16 01:32:17.424930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.659 [2024-07-16 01:32:17.425089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.659 [2024-07-16 01:32:17.425247] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.659 [2024-07-16 01:32:17.425258] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.659 [2024-07-16 01:32:17.425265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.659 [2024-07-16 01:32:17.427881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.659 [2024-07-16 01:32:17.437342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.659 [2024-07-16 01:32:17.437754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.659 [2024-07-16 01:32:17.437770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.659 [2024-07-16 01:32:17.437777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.659 [2024-07-16 01:32:17.437935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.659 [2024-07-16 01:32:17.438095] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.659 [2024-07-16 01:32:17.438103] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.659 [2024-07-16 01:32:17.438109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.659 [2024-07-16 01:32:17.440641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:51.659 [2024-07-16 01:32:17.450158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.659 [2024-07-16 01:32:17.450554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.659 [2024-07-16 01:32:17.450570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.659 [2024-07-16 01:32:17.450576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.659 [2024-07-16 01:32:17.450735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.659 [2024-07-16 01:32:17.450893] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.659 [2024-07-16 01:32:17.450902] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.659 [2024-07-16 01:32:17.450908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.659 [2024-07-16 01:32:17.453570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.659 [2024-07-16 01:32:17.462905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.659 [2024-07-16 01:32:17.463282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.659 [2024-07-16 01:32:17.463325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.659 [2024-07-16 01:32:17.463363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.659 [2024-07-16 01:32:17.463941] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.659 [2024-07-16 01:32:17.464365] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.659 [2024-07-16 01:32:17.464374] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.659 [2024-07-16 01:32:17.464380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.659 [2024-07-16 01:32:17.467099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:51.659 [2024-07-16 01:32:17.475934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.659 [2024-07-16 01:32:17.476359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.659 [2024-07-16 01:32:17.476400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.659 [2024-07-16 01:32:17.476423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.659 [2024-07-16 01:32:17.477005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.659 [2024-07-16 01:32:17.477179] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.659 [2024-07-16 01:32:17.477189] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.659 [2024-07-16 01:32:17.477195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.659 [2024-07-16 01:32:17.479944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.659 [2024-07-16 01:32:17.488884] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.659 [2024-07-16 01:32:17.489277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.659 [2024-07-16 01:32:17.489293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.659 [2024-07-16 01:32:17.489300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.659 [2024-07-16 01:32:17.489464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.659 [2024-07-16 01:32:17.489623] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.659 [2024-07-16 01:32:17.489632] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.659 [2024-07-16 01:32:17.489638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.659 [2024-07-16 01:32:17.492222] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:51.659 [2024-07-16 01:32:17.501799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.659 [2024-07-16 01:32:17.502193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.659 [2024-07-16 01:32:17.502208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.659 [2024-07-16 01:32:17.502215] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.659 [2024-07-16 01:32:17.502379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.659 [2024-07-16 01:32:17.502539] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.659 [2024-07-16 01:32:17.502548] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.659 [2024-07-16 01:32:17.502554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.659 [2024-07-16 01:32:17.505186] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.659 [2024-07-16 01:32:17.514634] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.659 [2024-07-16 01:32:17.514952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.659 [2024-07-16 01:32:17.514967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.659 [2024-07-16 01:32:17.514974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.659 [2024-07-16 01:32:17.515135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.659 [2024-07-16 01:32:17.515294] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.659 [2024-07-16 01:32:17.515303] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.659 [2024-07-16 01:32:17.515308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.659 [2024-07-16 01:32:17.517920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:51.659 [2024-07-16 01:32:17.527389] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.659 [2024-07-16 01:32:17.527804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.659 [2024-07-16 01:32:17.527820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.659 [2024-07-16 01:32:17.527827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.659 [2024-07-16 01:32:17.527985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.659 [2024-07-16 01:32:17.528143] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.659 [2024-07-16 01:32:17.528151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.659 [2024-07-16 01:32:17.528158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.659 [2024-07-16 01:32:17.530773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.659 [2024-07-16 01:32:17.540236] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.659 [2024-07-16 01:32:17.540585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.659 [2024-07-16 01:32:17.540600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.659 [2024-07-16 01:32:17.540607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.659 [2024-07-16 01:32:17.540765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.659 [2024-07-16 01:32:17.540924] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.659 [2024-07-16 01:32:17.540933] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.659 [2024-07-16 01:32:17.540939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.659 [2024-07-16 01:32:17.543554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:51.659 [2024-07-16 01:32:17.553008] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.659 [2024-07-16 01:32:17.553401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.659 [2024-07-16 01:32:17.553440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.659 [2024-07-16 01:32:17.553463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.659 [2024-07-16 01:32:17.554042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.659 [2024-07-16 01:32:17.554238] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.659 [2024-07-16 01:32:17.554245] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.659 [2024-07-16 01:32:17.554254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.659 [2024-07-16 01:32:17.556866] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.659 [2024-07-16 01:32:17.565734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.659 [2024-07-16 01:32:17.566070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.659 [2024-07-16 01:32:17.566086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.660 [2024-07-16 01:32:17.566093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.660 [2024-07-16 01:32:17.566250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.660 [2024-07-16 01:32:17.566415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.660 [2024-07-16 01:32:17.566424] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.660 [2024-07-16 01:32:17.566430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.660 [2024-07-16 01:32:17.568950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:51.660 [2024-07-16 01:32:17.578467] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.660 [2024-07-16 01:32:17.578883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.660 [2024-07-16 01:32:17.578928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.660 [2024-07-16 01:32:17.578950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.660 [2024-07-16 01:32:17.579509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.660 [2024-07-16 01:32:17.579668] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.660 [2024-07-16 01:32:17.579676] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.660 [2024-07-16 01:32:17.579682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.660 [2024-07-16 01:32:17.582205] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.660 [2024-07-16 01:32:17.591217] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.660 [2024-07-16 01:32:17.591657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.660 [2024-07-16 01:32:17.591673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.660 [2024-07-16 01:32:17.591681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.660 [2024-07-16 01:32:17.591846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.660 [2024-07-16 01:32:17.592014] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.660 [2024-07-16 01:32:17.592023] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.660 [2024-07-16 01:32:17.592029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.660 [2024-07-16 01:32:17.594651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:51.660 [2024-07-16 01:32:17.604060] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.660 [2024-07-16 01:32:17.604457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.660 [2024-07-16 01:32:17.604477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.660 [2024-07-16 01:32:17.604485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.660 [2024-07-16 01:32:17.604643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.660 [2024-07-16 01:32:17.604803] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.660 [2024-07-16 01:32:17.604813] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.660 [2024-07-16 01:32:17.604819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.660 [2024-07-16 01:32:17.607417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.660 [2024-07-16 01:32:17.616997] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.660 [2024-07-16 01:32:17.617414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.660 [2024-07-16 01:32:17.617431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.660 [2024-07-16 01:32:17.617437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.660 [2024-07-16 01:32:17.617596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.660 [2024-07-16 01:32:17.617755] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.660 [2024-07-16 01:32:17.617763] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.660 [2024-07-16 01:32:17.617770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.660 [2024-07-16 01:32:17.620385] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:51.660 [2024-07-16 01:32:17.629784] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.660 [2024-07-16 01:32:17.630199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.660 [2024-07-16 01:32:17.630214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.660 [2024-07-16 01:32:17.630221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.660 [2024-07-16 01:32:17.630402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.660 [2024-07-16 01:32:17.630571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.660 [2024-07-16 01:32:17.630581] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.660 [2024-07-16 01:32:17.630587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.660 [2024-07-16 01:32:17.633228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.660 [2024-07-16 01:32:17.642846] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.660 [2024-07-16 01:32:17.643218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.660 [2024-07-16 01:32:17.643260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.660 [2024-07-16 01:32:17.643283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.660 [2024-07-16 01:32:17.643783] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.660 [2024-07-16 01:32:17.643959] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.660 [2024-07-16 01:32:17.643967] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.660 [2024-07-16 01:32:17.643973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.919 [2024-07-16 01:32:17.646747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:51.919 [2024-07-16 01:32:17.655830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.919 [2024-07-16 01:32:17.656249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.919 [2024-07-16 01:32:17.656265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.919 [2024-07-16 01:32:17.656272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.919 [2024-07-16 01:32:17.656445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.919 [2024-07-16 01:32:17.656623] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.919 [2024-07-16 01:32:17.656632] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.919 [2024-07-16 01:32:17.656638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.919 [2024-07-16 01:32:17.659163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.919 [2024-07-16 01:32:17.668581] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.919 [2024-07-16 01:32:17.668927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.919 [2024-07-16 01:32:17.668943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.919 [2024-07-16 01:32:17.668950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.919 [2024-07-16 01:32:17.669108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.919 [2024-07-16 01:32:17.669266] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.919 [2024-07-16 01:32:17.669275] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.920 [2024-07-16 01:32:17.669281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.920 [2024-07-16 01:32:17.671894] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:51.920 [2024-07-16 01:32:17.681354] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.920 [2024-07-16 01:32:17.681778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-16 01:32:17.681794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.920 [2024-07-16 01:32:17.681801] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.920 [2024-07-16 01:32:17.681959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.920 [2024-07-16 01:32:17.682118] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.920 [2024-07-16 01:32:17.682127] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.920 [2024-07-16 01:32:17.682133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.920 [2024-07-16 01:32:17.684751] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.920 [2024-07-16 01:32:17.694066] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.920 [2024-07-16 01:32:17.694487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-16 01:32:17.694543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.920 [2024-07-16 01:32:17.694565] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.920 [2024-07-16 01:32:17.695143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.920 [2024-07-16 01:32:17.695706] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.920 [2024-07-16 01:32:17.695717] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.920 [2024-07-16 01:32:17.695723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.920 [2024-07-16 01:32:17.698297] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:51.920 [2024-07-16 01:32:17.706867] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.920 [2024-07-16 01:32:17.707280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-16 01:32:17.707323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.920 [2024-07-16 01:32:17.707360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.920 [2024-07-16 01:32:17.707785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.920 [2024-07-16 01:32:17.707954] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.920 [2024-07-16 01:32:17.707961] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.920 [2024-07-16 01:32:17.707968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.920 [2024-07-16 01:32:17.710534] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.920 [2024-07-16 01:32:17.719671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.920 [2024-07-16 01:32:17.720099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-16 01:32:17.720116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.920 [2024-07-16 01:32:17.720123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.920 [2024-07-16 01:32:17.720290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.920 [2024-07-16 01:32:17.720465] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.920 [2024-07-16 01:32:17.720475] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.920 [2024-07-16 01:32:17.720481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.920 [2024-07-16 01:32:17.723197] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:51.920 [2024-07-16 01:32:17.732650] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.920 [2024-07-16 01:32:17.733054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-16 01:32:17.733070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.920 [2024-07-16 01:32:17.733081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.920 [2024-07-16 01:32:17.733240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.920 [2024-07-16 01:32:17.733405] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.920 [2024-07-16 01:32:17.733414] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.920 [2024-07-16 01:32:17.733421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.920 [2024-07-16 01:32:17.735993] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.920 [2024-07-16 01:32:17.745725] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.920 [2024-07-16 01:32:17.746142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-16 01:32:17.746187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.920 [2024-07-16 01:32:17.746210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.920 [2024-07-16 01:32:17.746755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.920 [2024-07-16 01:32:17.746916] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.920 [2024-07-16 01:32:17.746925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.920 [2024-07-16 01:32:17.746931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.920 [2024-07-16 01:32:17.749530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:51.920 [2024-07-16 01:32:17.758565] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.920 [2024-07-16 01:32:17.758890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-16 01:32:17.758907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.920 [2024-07-16 01:32:17.758914] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.920 [2024-07-16 01:32:17.759081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.920 [2024-07-16 01:32:17.759248] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.920 [2024-07-16 01:32:17.759257] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.920 [2024-07-16 01:32:17.759264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.920 [2024-07-16 01:32:17.761821] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.920 [2024-07-16 01:32:17.771285] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.920 [2024-07-16 01:32:17.771705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-16 01:32:17.771748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.920 [2024-07-16 01:32:17.771769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.920 [2024-07-16 01:32:17.772361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.920 [2024-07-16 01:32:17.772845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.920 [2024-07-16 01:32:17.772864] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.920 [2024-07-16 01:32:17.772875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.920 [2024-07-16 01:32:17.777321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:51.920 [2024-07-16 01:32:17.785052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.920 [2024-07-16 01:32:17.785492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-16 01:32:17.785536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.920 [2024-07-16 01:32:17.785559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.920 [2024-07-16 01:32:17.785841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.920 [2024-07-16 01:32:17.786027] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.920 [2024-07-16 01:32:17.786037] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.920 [2024-07-16 01:32:17.786044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.920 [2024-07-16 01:32:17.788961] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.920 [2024-07-16 01:32:17.797962] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.920 [2024-07-16 01:32:17.798301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-16 01:32:17.798317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.920 [2024-07-16 01:32:17.798324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.920 [2024-07-16 01:32:17.798489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.920 [2024-07-16 01:32:17.798648] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.920 [2024-07-16 01:32:17.798656] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.920 [2024-07-16 01:32:17.798663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.920 [2024-07-16 01:32:17.801305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:51.920 [2024-07-16 01:32:17.810797] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.920 [2024-07-16 01:32:17.811146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.920 [2024-07-16 01:32:17.811163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.920 [2024-07-16 01:32:17.811170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.920 [2024-07-16 01:32:17.811343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.920 [2024-07-16 01:32:17.811511] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.920 [2024-07-16 01:32:17.811531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.921 [2024-07-16 01:32:17.811537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.921 [2024-07-16 01:32:17.814065] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.921 [2024-07-16 01:32:17.823597] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.921 [2024-07-16 01:32:17.823868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.921 [2024-07-16 01:32:17.823883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.921 [2024-07-16 01:32:17.823891] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.921 [2024-07-16 01:32:17.824048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.921 [2024-07-16 01:32:17.824207] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.921 [2024-07-16 01:32:17.824216] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.921 [2024-07-16 01:32:17.824222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.921 [2024-07-16 01:32:17.826844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:51.921 [2024-07-16 01:32:17.836469] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.921 [2024-07-16 01:32:17.836798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.921 [2024-07-16 01:32:17.836814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.921 [2024-07-16 01:32:17.836821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.921 [2024-07-16 01:32:17.836979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.921 [2024-07-16 01:32:17.837137] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.921 [2024-07-16 01:32:17.837146] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.921 [2024-07-16 01:32:17.837152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.921 [2024-07-16 01:32:17.839821] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.921 [2024-07-16 01:32:17.849326] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.921 [2024-07-16 01:32:17.849715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.921 [2024-07-16 01:32:17.849731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.921 [2024-07-16 01:32:17.849738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.921 [2024-07-16 01:32:17.849896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.921 [2024-07-16 01:32:17.850054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.921 [2024-07-16 01:32:17.850063] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.921 [2024-07-16 01:32:17.850069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.921 [2024-07-16 01:32:17.852670] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:51.921 [2024-07-16 01:32:17.862183] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.921 [2024-07-16 01:32:17.862578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.921 [2024-07-16 01:32:17.862594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.921 [2024-07-16 01:32:17.862601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.921 [2024-07-16 01:32:17.862764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.921 [2024-07-16 01:32:17.862923] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.921 [2024-07-16 01:32:17.862932] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.921 [2024-07-16 01:32:17.862937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.921 [2024-07-16 01:32:17.865550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.921 [2024-07-16 01:32:17.875162] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.921 [2024-07-16 01:32:17.875562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.921 [2024-07-16 01:32:17.875578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.921 [2024-07-16 01:32:17.875585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.921 [2024-07-16 01:32:17.875752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.921 [2024-07-16 01:32:17.875921] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.921 [2024-07-16 01:32:17.875930] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.921 [2024-07-16 01:32:17.875937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.921 [2024-07-16 01:32:17.878609] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:51.921 [2024-07-16 01:32:17.888063] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.921 [2024-07-16 01:32:17.888460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.921 [2024-07-16 01:32:17.888477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.921 [2024-07-16 01:32:17.888485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.921 [2024-07-16 01:32:17.888656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.921 [2024-07-16 01:32:17.888816] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.921 [2024-07-16 01:32:17.888825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.921 [2024-07-16 01:32:17.888831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.921 [2024-07-16 01:32:17.891356] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.921 [2024-07-16 01:32:17.900848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:51.921 [2024-07-16 01:32:17.901195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.921 [2024-07-16 01:32:17.901211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:51.921 [2024-07-16 01:32:17.901217] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:51.921 [2024-07-16 01:32:17.901396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:51.921 [2024-07-16 01:32:17.901583] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:51.921 [2024-07-16 01:32:17.901593] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:51.921 [2024-07-16 01:32:17.901603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:51.921 [2024-07-16 01:32:17.904391] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.203 [2024-07-16 01:32:17.913761] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.203 [2024-07-16 01:32:17.914210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.203 [2024-07-16 01:32:17.914255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:52.203 [2024-07-16 01:32:17.914277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:52.203 [2024-07-16 01:32:17.914870] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:52.203 [2024-07-16 01:32:17.915451] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.203 [2024-07-16 01:32:17.915465] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.203 [2024-07-16 01:32:17.915475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.203 [2024-07-16 01:32:17.919933] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.203 [2024-07-16 01:32:17.927478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.203 [2024-07-16 01:32:17.927837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.203 [2024-07-16 01:32:17.927879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:52.203 [2024-07-16 01:32:17.927902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:52.203 [2024-07-16 01:32:17.928379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:52.203 [2024-07-16 01:32:17.928563] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.203 [2024-07-16 01:32:17.928573] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.203 [2024-07-16 01:32:17.928580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.203 [2024-07-16 01:32:17.931502] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.203 [2024-07-16 01:32:17.940292] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.203 [2024-07-16 01:32:17.940675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.204 [2024-07-16 01:32:17.940692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:52.204 [2024-07-16 01:32:17.940699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:52.204 [2024-07-16 01:32:17.940857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:52.204 [2024-07-16 01:32:17.941015] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.204 [2024-07-16 01:32:17.941024] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.204 [2024-07-16 01:32:17.941031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.204 [2024-07-16 01:32:17.943654] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.204 [2024-07-16 01:32:17.953201] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.204 [2024-07-16 01:32:17.953546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.204 [2024-07-16 01:32:17.953566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:52.204 [2024-07-16 01:32:17.953573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:52.204 [2024-07-16 01:32:17.953742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:52.204 [2024-07-16 01:32:17.953901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.204 [2024-07-16 01:32:17.953910] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.204 [2024-07-16 01:32:17.953916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.204 [2024-07-16 01:32:17.956471] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.204 [2024-07-16 01:32:17.965962] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.204 [2024-07-16 01:32:17.966259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.204 [2024-07-16 01:32:17.966275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:52.204 [2024-07-16 01:32:17.966282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:52.204 [2024-07-16 01:32:17.966455] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:52.204 [2024-07-16 01:32:17.966631] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.204 [2024-07-16 01:32:17.966640] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.204 [2024-07-16 01:32:17.966646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.204 [2024-07-16 01:32:17.969172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.204 [2024-07-16 01:32:17.978955] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.204 [2024-07-16 01:32:17.979318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.204 [2024-07-16 01:32:17.979343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:52.204 [2024-07-16 01:32:17.979350] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:52.204 [2024-07-16 01:32:17.979522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:52.204 [2024-07-16 01:32:17.979698] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.204 [2024-07-16 01:32:17.979709] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.204 [2024-07-16 01:32:17.979717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.204 [2024-07-16 01:32:17.982480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.204 [2024-07-16 01:32:17.992286] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.204 [2024-07-16 01:32:17.992674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.204 [2024-07-16 01:32:17.992692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:52.204 [2024-07-16 01:32:17.992700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:52.204 [2024-07-16 01:32:17.992883] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:52.204 [2024-07-16 01:32:17.993070] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.204 [2024-07-16 01:32:17.993080] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.204 [2024-07-16 01:32:17.993086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.204 [2024-07-16 01:32:17.996116] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.204 [2024-07-16 01:32:18.005760] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.204 [2024-07-16 01:32:18.006218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.204 [2024-07-16 01:32:18.006235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:52.204 [2024-07-16 01:32:18.006244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:52.204 [2024-07-16 01:32:18.006446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:52.204 [2024-07-16 01:32:18.006659] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.204 [2024-07-16 01:32:18.006669] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.204 [2024-07-16 01:32:18.006677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.204 [2024-07-16 01:32:18.010012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.204 [2024-07-16 01:32:18.019264] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.204 [2024-07-16 01:32:18.019717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.204 [2024-07-16 01:32:18.019735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:52.204 [2024-07-16 01:32:18.019744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:52.204 [2024-07-16 01:32:18.019952] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:52.204 [2024-07-16 01:32:18.020162] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.204 [2024-07-16 01:32:18.020172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.204 [2024-07-16 01:32:18.020180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.204 [2024-07-16 01:32:18.023394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.204 [2024-07-16 01:32:18.032752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.204 [2024-07-16 01:32:18.033208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.204 [2024-07-16 01:32:18.033227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:52.204 [2024-07-16 01:32:18.033235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:52.204 [2024-07-16 01:32:18.033438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:52.204 [2024-07-16 01:32:18.033634] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.204 [2024-07-16 01:32:18.033645] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.204 [2024-07-16 01:32:18.033652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.204 [2024-07-16 01:32:18.036924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.204 [2024-07-16 01:32:18.046400] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.204 [2024-07-16 01:32:18.046853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.204 [2024-07-16 01:32:18.046871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:52.204 [2024-07-16 01:32:18.046880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:52.204 [2024-07-16 01:32:18.047089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:52.204 [2024-07-16 01:32:18.047298] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.204 [2024-07-16 01:32:18.047309] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.204 [2024-07-16 01:32:18.047316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.204 [2024-07-16 01:32:18.050662] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.204 [2024-07-16 01:32:18.060124] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.204 [2024-07-16 01:32:18.060523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.204 [2024-07-16 01:32:18.060542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:52.204 [2024-07-16 01:32:18.060551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:52.204 [2024-07-16 01:32:18.060759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:52.204 [2024-07-16 01:32:18.060969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.204 [2024-07-16 01:32:18.060980] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.204 [2024-07-16 01:32:18.060989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.204 [2024-07-16 01:32:18.064331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.204 [2024-07-16 01:32:18.073617] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.204 [2024-07-16 01:32:18.074087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.204 [2024-07-16 01:32:18.074106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:52.205 [2024-07-16 01:32:18.074115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:52.205 [2024-07-16 01:32:18.074323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:52.205 [2024-07-16 01:32:18.074548] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.205 [2024-07-16 01:32:18.074559] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.205 [2024-07-16 01:32:18.074567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.205 [2024-07-16 01:32:18.077781] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.205 [2024-07-16 01:32:18.087237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.205 [2024-07-16 01:32:18.087698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.205 [2024-07-16 01:32:18.087717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:52.205 [2024-07-16 01:32:18.087729] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:52.205 [2024-07-16 01:32:18.087937] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:52.205 [2024-07-16 01:32:18.088147] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.205 [2024-07-16 01:32:18.088157] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.205 [2024-07-16 01:32:18.088165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.205 [2024-07-16 01:32:18.091503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.205 [2024-07-16 01:32:18.100788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.205 [2024-07-16 01:32:18.101244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.205 [2024-07-16 01:32:18.101261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:52.205 [2024-07-16 01:32:18.101269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:52.205 [2024-07-16 01:32:18.101471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:52.205 [2024-07-16 01:32:18.101668] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.205 [2024-07-16 01:32:18.101678] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.205 [2024-07-16 01:32:18.101685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.205 [2024-07-16 01:32:18.104800] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.205 [2024-07-16 01:32:18.114143] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.205 [2024-07-16 01:32:18.114612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.205 [2024-07-16 01:32:18.114655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:52.205 [2024-07-16 01:32:18.114677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:52.205 [2024-07-16 01:32:18.115254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:52.205 [2024-07-16 01:32:18.115797] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.205 [2024-07-16 01:32:18.115808] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.205 [2024-07-16 01:32:18.115815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.205 [2024-07-16 01:32:18.118790] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.205 [2024-07-16 01:32:18.127403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.205 [2024-07-16 01:32:18.127844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.205 [2024-07-16 01:32:18.127897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:52.205 [2024-07-16 01:32:18.127918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:52.205 [2024-07-16 01:32:18.128509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:52.205 [2024-07-16 01:32:18.128733] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.205 [2024-07-16 01:32:18.128747] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.205 [2024-07-16 01:32:18.128755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.205 [2024-07-16 01:32:18.131673] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.205 [2024-07-16 01:32:18.140447] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.205 [2024-07-16 01:32:18.140878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.205 [2024-07-16 01:32:18.140927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:52.205 [2024-07-16 01:32:18.140950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:52.205 [2024-07-16 01:32:18.141531] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:52.205 [2024-07-16 01:32:18.141705] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.205 [2024-07-16 01:32:18.141715] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.205 [2024-07-16 01:32:18.141721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.205 [2024-07-16 01:32:18.144419] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.205 [2024-07-16 01:32:18.153303] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.205 [2024-07-16 01:32:18.153665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.205 [2024-07-16 01:32:18.153707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:52.205 [2024-07-16 01:32:18.153729] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:52.205 [2024-07-16 01:32:18.154306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:52.205 [2024-07-16 01:32:18.154891] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.205 [2024-07-16 01:32:18.154900] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.205 [2024-07-16 01:32:18.154906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.205 [2024-07-16 01:32:18.157475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.205 [2024-07-16 01:32:18.166276] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.205 [2024-07-16 01:32:18.166645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.205 [2024-07-16 01:32:18.166661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:52.205 [2024-07-16 01:32:18.166669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:52.205 [2024-07-16 01:32:18.166841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:52.205 [2024-07-16 01:32:18.167013] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.205 [2024-07-16 01:32:18.167022] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.205 [2024-07-16 01:32:18.167029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.205 [2024-07-16 01:32:18.169761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.205 [2024-07-16 01:32:18.179098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.205 [2024-07-16 01:32:18.179550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.205 [2024-07-16 01:32:18.179593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:52.205 [2024-07-16 01:32:18.179615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:52.205 [2024-07-16 01:32:18.180132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:52.205 [2024-07-16 01:32:18.180318] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.205 [2024-07-16 01:32:18.180327] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.205 [2024-07-16 01:32:18.180333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.465 [2024-07-16 01:32:18.183039] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.465 [2024-07-16 01:32:18.191850] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.465 [2024-07-16 01:32:18.192183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.465 [2024-07-16 01:32:18.192224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:52.465 [2024-07-16 01:32:18.192246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:52.465 [2024-07-16 01:32:18.192782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:52.465 [2024-07-16 01:32:18.192951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.465 [2024-07-16 01:32:18.192960] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.465 [2024-07-16 01:32:18.192966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.465 [2024-07-16 01:32:18.195718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.465 [2024-07-16 01:32:18.204655] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.465 [2024-07-16 01:32:18.205010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.465 [2024-07-16 01:32:18.205026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:52.465 [2024-07-16 01:32:18.205034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:52.465 [2024-07-16 01:32:18.205201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:52.465 [2024-07-16 01:32:18.205373] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.465 [2024-07-16 01:32:18.205382] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.465 [2024-07-16 01:32:18.205389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.465 [2024-07-16 01:32:18.207986] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.465 [2024-07-16 01:32:18.217523] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.465 [2024-07-16 01:32:18.217881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.465 [2024-07-16 01:32:18.217896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:52.465 [2024-07-16 01:32:18.217903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:52.465 [2024-07-16 01:32:18.218073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:52.465 [2024-07-16 01:32:18.218240] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.465 [2024-07-16 01:32:18.218249] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.465 [2024-07-16 01:32:18.218256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.465 [2024-07-16 01:32:18.220988] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.465 [2024-07-16 01:32:18.230320] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.465 [2024-07-16 01:32:18.230646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.465 [2024-07-16 01:32:18.230664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:52.465 [2024-07-16 01:32:18.230672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:52.465 [2024-07-16 01:32:18.230830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:52.465 [2024-07-16 01:32:18.230988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.465 [2024-07-16 01:32:18.230997] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.465 [2024-07-16 01:32:18.231003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.465 [2024-07-16 01:32:18.233657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.465 [2024-07-16 01:32:18.243153] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.465 [2024-07-16 01:32:18.243505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.465 [2024-07-16 01:32:18.243522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:52.465 [2024-07-16 01:32:18.243529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:52.465 [2024-07-16 01:32:18.243688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:52.465 [2024-07-16 01:32:18.243846] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.465 [2024-07-16 01:32:18.243855] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.465 [2024-07-16 01:32:18.243861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.465 [2024-07-16 01:32:18.246491] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.465 [2024-07-16 01:32:18.256164] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.465 [2024-07-16 01:32:18.256570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.465 [2024-07-16 01:32:18.256588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:52.465 [2024-07-16 01:32:18.256595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:52.465 [2024-07-16 01:32:18.256762] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:52.465 [2024-07-16 01:32:18.256929] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.465 [2024-07-16 01:32:18.256937] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.465 [2024-07-16 01:32:18.256948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.465 [2024-07-16 01:32:18.259598] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.465 [2024-07-16 01:32:18.269002] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.465 [2024-07-16 01:32:18.269403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.465 [2024-07-16 01:32:18.269446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:52.465 [2024-07-16 01:32:18.269469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:52.465 [2024-07-16 01:32:18.269915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:52.465 [2024-07-16 01:32:18.270083] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.465 [2024-07-16 01:32:18.270092] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.465 [2024-07-16 01:32:18.270098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.465 [2024-07-16 01:32:18.272659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.465 [2024-07-16 01:32:18.281876] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.465 [2024-07-16 01:32:18.282241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.465 [2024-07-16 01:32:18.282282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:52.465 [2024-07-16 01:32:18.282304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:52.465 [2024-07-16 01:32:18.282898] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:52.465 [2024-07-16 01:32:18.283287] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.465 [2024-07-16 01:32:18.283296] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.465 [2024-07-16 01:32:18.283302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.465 [2024-07-16 01:32:18.285851] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.465 [2024-07-16 01:32:18.294705] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.465 [2024-07-16 01:32:18.295124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.465 [2024-07-16 01:32:18.295171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:52.465 [2024-07-16 01:32:18.295193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:52.465 [2024-07-16 01:32:18.295786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:52.465 [2024-07-16 01:32:18.296030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.465 [2024-07-16 01:32:18.296039] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.465 [2024-07-16 01:32:18.296046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.465 [2024-07-16 01:32:18.298592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.465 [2024-07-16 01:32:18.307531] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.465 [2024-07-16 01:32:18.307977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.465 [2024-07-16 01:32:18.308027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:52.465 [2024-07-16 01:32:18.308049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:52.465 [2024-07-16 01:32:18.308643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:52.465 [2024-07-16 01:32:18.309226] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.465 [2024-07-16 01:32:18.309254] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.466 [2024-07-16 01:32:18.309261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.466 [2024-07-16 01:32:18.313295] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.466 [2024-07-16 01:32:18.321633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.466 [2024-07-16 01:32:18.322058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.466 [2024-07-16 01:32:18.322075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:52.466 [2024-07-16 01:32:18.322082] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:52.466 [2024-07-16 01:32:18.322264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:52.466 [2024-07-16 01:32:18.322452] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.466 [2024-07-16 01:32:18.322462] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.466 [2024-07-16 01:32:18.322468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.466 [2024-07-16 01:32:18.325383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.466 [2024-07-16 01:32:18.334465] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.466 [2024-07-16 01:32:18.334860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.466 [2024-07-16 01:32:18.334875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:52.466 [2024-07-16 01:32:18.334883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:52.466 [2024-07-16 01:32:18.335040] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:52.466 [2024-07-16 01:32:18.335198] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.466 [2024-07-16 01:32:18.335207] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.466 [2024-07-16 01:32:18.335213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.466 [2024-07-16 01:32:18.337829] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.466 [2024-07-16 01:32:18.347299] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:52.466 [2024-07-16 01:32:18.347719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.466 [2024-07-16 01:32:18.347736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:52.466 [2024-07-16 01:32:18.347742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:52.466 [2024-07-16 01:32:18.347901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:52.466 [2024-07-16 01:32:18.348063] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:52.466 [2024-07-16 01:32:18.348071] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:52.466 [2024-07-16 01:32:18.348077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:52.466 [2024-07-16 01:32:18.350689] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.466 [2024-07-16 01:32:18.360010] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.466 [2024-07-16 01:32:18.360421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.466 [2024-07-16 01:32:18.360437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.466 [2024-07-16 01:32:18.360444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.466 [2024-07-16 01:32:18.360603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.466 [2024-07-16 01:32:18.360761] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.466 [2024-07-16 01:32:18.360769] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.466 [2024-07-16 01:32:18.360775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.466 [2024-07-16 01:32:18.363397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.466 [2024-07-16 01:32:18.372863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.466 [2024-07-16 01:32:18.373302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.466 [2024-07-16 01:32:18.373355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.466 [2024-07-16 01:32:18.373378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.466 [2024-07-16 01:32:18.373956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.466 [2024-07-16 01:32:18.374451] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.466 [2024-07-16 01:32:18.374460] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.466 [2024-07-16 01:32:18.374467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.466 [2024-07-16 01:32:18.377053] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.466 [2024-07-16 01:32:18.385686] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.466 [2024-07-16 01:32:18.386091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.466 [2024-07-16 01:32:18.386133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.466 [2024-07-16 01:32:18.386155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.466 [2024-07-16 01:32:18.386640] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.466 [2024-07-16 01:32:18.386801] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.466 [2024-07-16 01:32:18.386810] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.466 [2024-07-16 01:32:18.386816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.466 [2024-07-16 01:32:18.389342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.466 [2024-07-16 01:32:18.398413] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.466 [2024-07-16 01:32:18.398847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.466 [2024-07-16 01:32:18.398888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.466 [2024-07-16 01:32:18.398910] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.466 [2024-07-16 01:32:18.399412] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.466 [2024-07-16 01:32:18.399573] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.466 [2024-07-16 01:32:18.399582] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.466 [2024-07-16 01:32:18.399588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.466 [2024-07-16 01:32:18.402104] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.466 [2024-07-16 01:32:18.411181] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.466 [2024-07-16 01:32:18.411614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.466 [2024-07-16 01:32:18.411655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.466 [2024-07-16 01:32:18.411677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.466 [2024-07-16 01:32:18.412242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.466 [2024-07-16 01:32:18.412425] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.466 [2024-07-16 01:32:18.412434] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.466 [2024-07-16 01:32:18.412441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.466 [2024-07-16 01:32:18.415032] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.466 [2024-07-16 01:32:18.423955] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.466 [2024-07-16 01:32:18.424385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.466 [2024-07-16 01:32:18.424402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.466 [2024-07-16 01:32:18.424409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.466 [2024-07-16 01:32:18.424576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.466 [2024-07-16 01:32:18.424749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.466 [2024-07-16 01:32:18.424758] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.466 [2024-07-16 01:32:18.424764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.466 [2024-07-16 01:32:18.427382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.466 [2024-07-16 01:32:18.436693] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.466 [2024-07-16 01:32:18.437105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.466 [2024-07-16 01:32:18.437121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.466 [2024-07-16 01:32:18.437131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.466 [2024-07-16 01:32:18.437290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.466 [2024-07-16 01:32:18.437473] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.466 [2024-07-16 01:32:18.437483] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.466 [2024-07-16 01:32:18.437489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.466 [2024-07-16 01:32:18.440078] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.466 [2024-07-16 01:32:18.449699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.466 [2024-07-16 01:32:18.450109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.466 [2024-07-16 01:32:18.450126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.466 [2024-07-16 01:32:18.450134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.466 [2024-07-16 01:32:18.450306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.466 [2024-07-16 01:32:18.450484] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.466 [2024-07-16 01:32:18.450494] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.467 [2024-07-16 01:32:18.450500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.725 [2024-07-16 01:32:18.453231] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.725 [2024-07-16 01:32:18.462536] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.725 [2024-07-16 01:32:18.462878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.725 [2024-07-16 01:32:18.462920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.725 [2024-07-16 01:32:18.462941] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.725 [2024-07-16 01:32:18.463439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.725 [2024-07-16 01:32:18.463607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.725 [2024-07-16 01:32:18.463616] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.725 [2024-07-16 01:32:18.463622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.725 [2024-07-16 01:32:18.466254] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.725 [2024-07-16 01:32:18.475281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.725 [2024-07-16 01:32:18.475661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.725 [2024-07-16 01:32:18.475703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.725 [2024-07-16 01:32:18.475725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.725 [2024-07-16 01:32:18.476305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.725 [2024-07-16 01:32:18.476845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.726 [2024-07-16 01:32:18.476858] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.726 [2024-07-16 01:32:18.476864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.726 [2024-07-16 01:32:18.479428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.726 [2024-07-16 01:32:18.488138] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.726 [2024-07-16 01:32:18.488555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.726 [2024-07-16 01:32:18.488572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.726 [2024-07-16 01:32:18.488578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.726 [2024-07-16 01:32:18.488737] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.726 [2024-07-16 01:32:18.488896] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.726 [2024-07-16 01:32:18.488904] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.726 [2024-07-16 01:32:18.488910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.726 [2024-07-16 01:32:18.491621] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.726 [2024-07-16 01:32:18.501089] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.726 [2024-07-16 01:32:18.501536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.726 [2024-07-16 01:32:18.501579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.726 [2024-07-16 01:32:18.501602] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.726 [2024-07-16 01:32:18.502179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.726 [2024-07-16 01:32:18.502750] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.726 [2024-07-16 01:32:18.502759] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.726 [2024-07-16 01:32:18.502765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.726 [2024-07-16 01:32:18.505433] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.726 [2024-07-16 01:32:18.513837] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.726 [2024-07-16 01:32:18.514260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.726 [2024-07-16 01:32:18.514276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.726 [2024-07-16 01:32:18.514283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.726 [2024-07-16 01:32:18.514466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.726 [2024-07-16 01:32:18.514634] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.726 [2024-07-16 01:32:18.514643] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.726 [2024-07-16 01:32:18.514649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.726 [2024-07-16 01:32:18.517233] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.726 [2024-07-16 01:32:18.526698] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.726 [2024-07-16 01:32:18.527106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.726 [2024-07-16 01:32:18.527122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.726 [2024-07-16 01:32:18.527130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.726 [2024-07-16 01:32:18.527288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.726 [2024-07-16 01:32:18.527474] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.726 [2024-07-16 01:32:18.527483] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.726 [2024-07-16 01:32:18.527490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.726 [2024-07-16 01:32:18.530075] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.726 [2024-07-16 01:32:18.539454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.726 [2024-07-16 01:32:18.539879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.726 [2024-07-16 01:32:18.539921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.726 [2024-07-16 01:32:18.539943] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.726 [2024-07-16 01:32:18.540543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.726 [2024-07-16 01:32:18.540917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.726 [2024-07-16 01:32:18.540926] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.726 [2024-07-16 01:32:18.540932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.726 [2024-07-16 01:32:18.543497] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.726 [2024-07-16 01:32:18.552214] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.726 [2024-07-16 01:32:18.552636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.726 [2024-07-16 01:32:18.552652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.726 [2024-07-16 01:32:18.552658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.726 [2024-07-16 01:32:18.552817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.726 [2024-07-16 01:32:18.552975] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.726 [2024-07-16 01:32:18.552984] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.726 [2024-07-16 01:32:18.552990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.726 [2024-07-16 01:32:18.555607] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.726 [2024-07-16 01:32:18.565059] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.726 [2024-07-16 01:32:18.565480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.726 [2024-07-16 01:32:18.565523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.726 [2024-07-16 01:32:18.565544] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.726 [2024-07-16 01:32:18.566004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.726 [2024-07-16 01:32:18.566163] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.726 [2024-07-16 01:32:18.566172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.726 [2024-07-16 01:32:18.566178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.726 [2024-07-16 01:32:18.568788] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.727 [2024-07-16 01:32:18.577868] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.727 [2024-07-16 01:32:18.578279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.727 [2024-07-16 01:32:18.578325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.727 [2024-07-16 01:32:18.578361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.727 [2024-07-16 01:32:18.578878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.727 [2024-07-16 01:32:18.579048] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.727 [2024-07-16 01:32:18.579057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.727 [2024-07-16 01:32:18.579063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.727 [2024-07-16 01:32:18.581625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.727 [2024-07-16 01:32:18.590641] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.727 [2024-07-16 01:32:18.591035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.727 [2024-07-16 01:32:18.591077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.727 [2024-07-16 01:32:18.591100] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.727 [2024-07-16 01:32:18.591692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.727 [2024-07-16 01:32:18.591910] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.727 [2024-07-16 01:32:18.591919] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.727 [2024-07-16 01:32:18.591925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.727 [2024-07-16 01:32:18.596379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.727 [2024-07-16 01:32:18.604483] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.727 [2024-07-16 01:32:18.604835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.727 [2024-07-16 01:32:18.604852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.727 [2024-07-16 01:32:18.604860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.727 [2024-07-16 01:32:18.605042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.727 [2024-07-16 01:32:18.605225] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.727 [2024-07-16 01:32:18.605235] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.727 [2024-07-16 01:32:18.605246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.727 [2024-07-16 01:32:18.608167] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.727 [2024-07-16 01:32:18.617241] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.727 [2024-07-16 01:32:18.617585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.727 [2024-07-16 01:32:18.617601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.727 [2024-07-16 01:32:18.617607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.727 [2024-07-16 01:32:18.617765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.727 [2024-07-16 01:32:18.617924] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.727 [2024-07-16 01:32:18.617932] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.727 [2024-07-16 01:32:18.617939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.727 [2024-07-16 01:32:18.620555] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.727 [2024-07-16 01:32:18.630127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.727 [2024-07-16 01:32:18.630490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.727 [2024-07-16 01:32:18.630517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.727 [2024-07-16 01:32:18.630525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.727 [2024-07-16 01:32:18.630683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.727 [2024-07-16 01:32:18.630843] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.727 [2024-07-16 01:32:18.630852] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.727 [2024-07-16 01:32:18.630861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.727 [2024-07-16 01:32:18.633465] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.727 [2024-07-16 01:32:18.642922] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.727 [2024-07-16 01:32:18.643202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.727 [2024-07-16 01:32:18.643218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.727 [2024-07-16 01:32:18.643225] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.727 [2024-07-16 01:32:18.643397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.727 [2024-07-16 01:32:18.643566] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.727 [2024-07-16 01:32:18.643575] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.727 [2024-07-16 01:32:18.643581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.727 [2024-07-16 01:32:18.646119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.727 [2024-07-16 01:32:18.655906] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.727 [2024-07-16 01:32:18.656316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.727 [2024-07-16 01:32:18.656335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.727 [2024-07-16 01:32:18.656347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.727 [2024-07-16 01:32:18.656529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.727 [2024-07-16 01:32:18.656696] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.727 [2024-07-16 01:32:18.656705] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.727 [2024-07-16 01:32:18.656711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.727 [2024-07-16 01:32:18.659289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.727 [2024-07-16 01:32:18.668753] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.727 [2024-07-16 01:32:18.669093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.727 [2024-07-16 01:32:18.669109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.727 [2024-07-16 01:32:18.669117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.728 [2024-07-16 01:32:18.669275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.728 [2024-07-16 01:32:18.669458] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.728 [2024-07-16 01:32:18.669468] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.728 [2024-07-16 01:32:18.669474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.728 [2024-07-16 01:32:18.672060] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.728 [2024-07-16 01:32:18.681477] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.728 [2024-07-16 01:32:18.681894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.728 [2024-07-16 01:32:18.681909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.728 [2024-07-16 01:32:18.681916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.728 [2024-07-16 01:32:18.682074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.728 [2024-07-16 01:32:18.682232] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.728 [2024-07-16 01:32:18.682241] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.728 [2024-07-16 01:32:18.682247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.728 [2024-07-16 01:32:18.684854] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.728 [2024-07-16 01:32:18.694234] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.728 [2024-07-16 01:32:18.694649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.728 [2024-07-16 01:32:18.694666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.728 [2024-07-16 01:32:18.694673] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.728 [2024-07-16 01:32:18.694830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.728 [2024-07-16 01:32:18.694991] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.728 [2024-07-16 01:32:18.695000] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.728 [2024-07-16 01:32:18.695006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.728 [2024-07-16 01:32:18.697617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.728 [2024-07-16 01:32:18.707161] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.728 [2024-07-16 01:32:18.707511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.728 [2024-07-16 01:32:18.707527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.728 [2024-07-16 01:32:18.707534] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.728 [2024-07-16 01:32:18.707708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.728 [2024-07-16 01:32:18.707892] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.728 [2024-07-16 01:32:18.707901] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.728 [2024-07-16 01:32:18.707910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.728 [2024-07-16 01:32:18.710682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.987 [2024-07-16 01:32:18.720067] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.987 [2024-07-16 01:32:18.720502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.987 [2024-07-16 01:32:18.720544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.987 [2024-07-16 01:32:18.720566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.987 [2024-07-16 01:32:18.721143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.987 [2024-07-16 01:32:18.721524] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.987 [2024-07-16 01:32:18.721533] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.987 [2024-07-16 01:32:18.721539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.987 [2024-07-16 01:32:18.724132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.987 [2024-07-16 01:32:18.732851] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.987 [2024-07-16 01:32:18.733183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.987 [2024-07-16 01:32:18.733198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.987 [2024-07-16 01:32:18.733205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.987 [2024-07-16 01:32:18.733383] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.987 [2024-07-16 01:32:18.733551] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.987 [2024-07-16 01:32:18.733561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.987 [2024-07-16 01:32:18.733567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.987 [2024-07-16 01:32:18.736152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.987 [2024-07-16 01:32:18.745629] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.987 [2024-07-16 01:32:18.746012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.987 [2024-07-16 01:32:18.746028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.987 [2024-07-16 01:32:18.746036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.987 [2024-07-16 01:32:18.746194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.987 [2024-07-16 01:32:18.746357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.987 [2024-07-16 01:32:18.746366] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.987 [2024-07-16 01:32:18.746372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.987 [2024-07-16 01:32:18.749113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.987 [2024-07-16 01:32:18.758415] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.987 [2024-07-16 01:32:18.758829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.987 [2024-07-16 01:32:18.758871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.987 [2024-07-16 01:32:18.758893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.988 [2024-07-16 01:32:18.759400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.988 [2024-07-16 01:32:18.759570] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.988 [2024-07-16 01:32:18.759579] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.988 [2024-07-16 01:32:18.759585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.988 [2024-07-16 01:32:18.762119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.988 [2024-07-16 01:32:18.771220] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.988 [2024-07-16 01:32:18.771618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.988 [2024-07-16 01:32:18.771634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.988 [2024-07-16 01:32:18.771641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.988 [2024-07-16 01:32:18.771799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.988 [2024-07-16 01:32:18.771957] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.988 [2024-07-16 01:32:18.771966] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.988 [2024-07-16 01:32:18.771972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.988 [2024-07-16 01:32:18.774602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.988 [2024-07-16 01:32:18.784072] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.988 [2024-07-16 01:32:18.784412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.988 [2024-07-16 01:32:18.784428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.988 [2024-07-16 01:32:18.784438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.988 [2024-07-16 01:32:18.784595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.988 [2024-07-16 01:32:18.784753] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.988 [2024-07-16 01:32:18.784762] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.988 [2024-07-16 01:32:18.784768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.988 [2024-07-16 01:32:18.787380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.988 [2024-07-16 01:32:18.796843] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.988 [2024-07-16 01:32:18.797267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.988 [2024-07-16 01:32:18.797283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.988 [2024-07-16 01:32:18.797290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.988 [2024-07-16 01:32:18.797474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.988 [2024-07-16 01:32:18.797642] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.988 [2024-07-16 01:32:18.797651] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.988 [2024-07-16 01:32:18.797657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.988 [2024-07-16 01:32:18.800239] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.988 [2024-07-16 01:32:18.809687] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.988 [2024-07-16 01:32:18.809943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.988 [2024-07-16 01:32:18.809959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.988 [2024-07-16 01:32:18.809966] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.988 [2024-07-16 01:32:18.810124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.988 [2024-07-16 01:32:18.810282] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.988 [2024-07-16 01:32:18.810290] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.988 [2024-07-16 01:32:18.810296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.988 [2024-07-16 01:32:18.812909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.988 [2024-07-16 01:32:18.822466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.988 [2024-07-16 01:32:18.822879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.988 [2024-07-16 01:32:18.822895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.988 [2024-07-16 01:32:18.822902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.988 [2024-07-16 01:32:18.823060] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.988 [2024-07-16 01:32:18.823218] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.988 [2024-07-16 01:32:18.823229] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.988 [2024-07-16 01:32:18.823234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.988 [2024-07-16 01:32:18.825852] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.988 [2024-07-16 01:32:18.835173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.988 [2024-07-16 01:32:18.835515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.988 [2024-07-16 01:32:18.835531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.988 [2024-07-16 01:32:18.835537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.988 [2024-07-16 01:32:18.835695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.988 [2024-07-16 01:32:18.835855] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.988 [2024-07-16 01:32:18.835863] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.988 [2024-07-16 01:32:18.835869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.988 [2024-07-16 01:32:18.838483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.988 [2024-07-16 01:32:18.848011] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.988 [2024-07-16 01:32:18.848418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.988 [2024-07-16 01:32:18.848461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.988 [2024-07-16 01:32:18.848483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.988 [2024-07-16 01:32:18.849060] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.988 [2024-07-16 01:32:18.849437] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.988 [2024-07-16 01:32:18.849447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.988 [2024-07-16 01:32:18.849453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.988 [2024-07-16 01:32:18.852021] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.988 [2024-07-16 01:32:18.860829] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.988 [2024-07-16 01:32:18.861230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.988 [2024-07-16 01:32:18.861271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.988 [2024-07-16 01:32:18.861294] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.988 [2024-07-16 01:32:18.861759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.988 [2024-07-16 01:32:18.861929] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.988 [2024-07-16 01:32:18.861938] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.988 [2024-07-16 01:32:18.861944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.988 [2024-07-16 01:32:18.864507] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.988 [2024-07-16 01:32:18.873665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.988 [2024-07-16 01:32:18.873989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.988 [2024-07-16 01:32:18.874005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.988 [2024-07-16 01:32:18.874012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.988 [2024-07-16 01:32:18.874170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.988 [2024-07-16 01:32:18.874328] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.988 [2024-07-16 01:32:18.874344] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.988 [2024-07-16 01:32:18.874352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.988 [2024-07-16 01:32:18.877009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.988 [2024-07-16 01:32:18.886407] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.988 [2024-07-16 01:32:18.886821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.988 [2024-07-16 01:32:18.886873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.988 [2024-07-16 01:32:18.886895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.988 [2024-07-16 01:32:18.887449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.988 [2024-07-16 01:32:18.887618] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.988 [2024-07-16 01:32:18.887627] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.988 [2024-07-16 01:32:18.887633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.988 [2024-07-16 01:32:18.890216] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.988 [2024-07-16 01:32:18.899239] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.988 [2024-07-16 01:32:18.899677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.988 [2024-07-16 01:32:18.899719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.989 [2024-07-16 01:32:18.899741] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.989 [2024-07-16 01:32:18.900316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.989 [2024-07-16 01:32:18.900804] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.989 [2024-07-16 01:32:18.900814] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.989 [2024-07-16 01:32:18.900820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.989 [2024-07-16 01:32:18.903386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.989 [2024-07-16 01:32:18.912019] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.989 [2024-07-16 01:32:18.912364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.989 [2024-07-16 01:32:18.912380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.989 [2024-07-16 01:32:18.912387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.989 [2024-07-16 01:32:18.912548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.989 [2024-07-16 01:32:18.912706] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.989 [2024-07-16 01:32:18.912715] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.989 [2024-07-16 01:32:18.912720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.989 [2024-07-16 01:32:18.915335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.989 [2024-07-16 01:32:18.924792] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.989 [2024-07-16 01:32:18.925188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.989 [2024-07-16 01:32:18.925205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.989 [2024-07-16 01:32:18.925211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.989 [2024-07-16 01:32:18.925375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.989 [2024-07-16 01:32:18.925557] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.989 [2024-07-16 01:32:18.925567] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.989 [2024-07-16 01:32:18.925573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.989 [2024-07-16 01:32:18.928152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.989 [2024-07-16 01:32:18.937621] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.989 [2024-07-16 01:32:18.938018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.989 [2024-07-16 01:32:18.938034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.989 [2024-07-16 01:32:18.938041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.989 [2024-07-16 01:32:18.938199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.989 [2024-07-16 01:32:18.938363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.989 [2024-07-16 01:32:18.938388] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.989 [2024-07-16 01:32:18.938394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.989 [2024-07-16 01:32:18.940984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.989 [2024-07-16 01:32:18.950464] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.989 [2024-07-16 01:32:18.950879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.989 [2024-07-16 01:32:18.950924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.989 [2024-07-16 01:32:18.950946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.989 [2024-07-16 01:32:18.951533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.989 [2024-07-16 01:32:18.951701] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.989 [2024-07-16 01:32:18.951710] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.989 [2024-07-16 01:32:18.951719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.989 [2024-07-16 01:32:18.954298] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:52.989 [2024-07-16 01:32:18.963318] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:52.989 [2024-07-16 01:32:18.963738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.989 [2024-07-16 01:32:18.963754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:52.989 [2024-07-16 01:32:18.963760] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:52.989 [2024-07-16 01:32:18.963918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:52.989 [2024-07-16 01:32:18.964077] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:52.989 [2024-07-16 01:32:18.964086] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:52.989 [2024-07-16 01:32:18.964092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:52.989 [2024-07-16 01:32:18.966710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.247 [2024-07-16 01:32:18.976240] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.247 [2024-07-16 01:32:18.976658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.248 [2024-07-16 01:32:18.976674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.248 [2024-07-16 01:32:18.976680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.248 [2024-07-16 01:32:18.976838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.248 [2024-07-16 01:32:18.976997] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.248 [2024-07-16 01:32:18.977005] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.248 [2024-07-16 01:32:18.977012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.248 [2024-07-16 01:32:18.979747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.248 [2024-07-16 01:32:18.989051] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.248 [2024-07-16 01:32:18.989502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.248 [2024-07-16 01:32:18.989547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.248 [2024-07-16 01:32:18.989569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.248 [2024-07-16 01:32:18.989759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.248 [2024-07-16 01:32:18.989919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.248 [2024-07-16 01:32:18.989928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.248 [2024-07-16 01:32:18.989933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.248 [2024-07-16 01:32:18.992540] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.248 [2024-07-16 01:32:19.001807] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.248 [2024-07-16 01:32:19.002273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.248 [2024-07-16 01:32:19.002288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.248 [2024-07-16 01:32:19.002294] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.248 [2024-07-16 01:32:19.002478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.248 [2024-07-16 01:32:19.002646] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.248 [2024-07-16 01:32:19.002655] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.248 [2024-07-16 01:32:19.002661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.248 [2024-07-16 01:32:19.005400] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.248 [2024-07-16 01:32:19.014697] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.248 [2024-07-16 01:32:19.015107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.248 [2024-07-16 01:32:19.015148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.248 [2024-07-16 01:32:19.015170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.248 [2024-07-16 01:32:19.015692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.248 [2024-07-16 01:32:19.015852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.248 [2024-07-16 01:32:19.015861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.248 [2024-07-16 01:32:19.015867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.248 [2024-07-16 01:32:19.018412] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.248 [2024-07-16 01:32:19.027513] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.248 [2024-07-16 01:32:19.027907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.248 [2024-07-16 01:32:19.027946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.248 [2024-07-16 01:32:19.027969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.248 [2024-07-16 01:32:19.028559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.248 [2024-07-16 01:32:19.029042] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.248 [2024-07-16 01:32:19.029051] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.248 [2024-07-16 01:32:19.029057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.248 [2024-07-16 01:32:19.031613] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.248 [2024-07-16 01:32:19.040260] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.248 [2024-07-16 01:32:19.040675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.248 [2024-07-16 01:32:19.040692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.248 [2024-07-16 01:32:19.040700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.248 [2024-07-16 01:32:19.040867] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.248 [2024-07-16 01:32:19.041038] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.248 [2024-07-16 01:32:19.041047] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.248 [2024-07-16 01:32:19.041053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.248 [2024-07-16 01:32:19.043619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.248 [2024-07-16 01:32:19.053205] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.248 [2024-07-16 01:32:19.053639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.248 [2024-07-16 01:32:19.053683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.248 [2024-07-16 01:32:19.053705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.248 [2024-07-16 01:32:19.053934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.248 [2024-07-16 01:32:19.054094] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.248 [2024-07-16 01:32:19.054103] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.248 [2024-07-16 01:32:19.054108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.248 [2024-07-16 01:32:19.056722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.248 [2024-07-16 01:32:19.066041] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.248 [2024-07-16 01:32:19.066443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.248 [2024-07-16 01:32:19.066459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.248 [2024-07-16 01:32:19.066466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.248 [2024-07-16 01:32:19.066624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.248 [2024-07-16 01:32:19.066783] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.248 [2024-07-16 01:32:19.066791] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.248 [2024-07-16 01:32:19.066797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.248 [2024-07-16 01:32:19.069409] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.248 [2024-07-16 01:32:19.078791] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.248 [2024-07-16 01:32:19.079206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.248 [2024-07-16 01:32:19.079223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.248 [2024-07-16 01:32:19.079229] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.248 [2024-07-16 01:32:19.079395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.248 [2024-07-16 01:32:19.079554] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.248 [2024-07-16 01:32:19.079563] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.248 [2024-07-16 01:32:19.079569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.248 [2024-07-16 01:32:19.082129] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.248 [2024-07-16 01:32:19.091608] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.248 [2024-07-16 01:32:19.092026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.248 [2024-07-16 01:32:19.092042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.248 [2024-07-16 01:32:19.092048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.248 [2024-07-16 01:32:19.092206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.248 [2024-07-16 01:32:19.092387] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.248 [2024-07-16 01:32:19.092397] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.248 [2024-07-16 01:32:19.092404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.248 [2024-07-16 01:32:19.094996] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.248 [2024-07-16 01:32:19.104394] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.248 [2024-07-16 01:32:19.104823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.248 [2024-07-16 01:32:19.104864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.248 [2024-07-16 01:32:19.104886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.248 [2024-07-16 01:32:19.105479] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.248 [2024-07-16 01:32:19.106060] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.248 [2024-07-16 01:32:19.106070] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.248 [2024-07-16 01:32:19.106077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.248 [2024-07-16 01:32:19.108831] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.248 [2024-07-16 01:32:19.117408] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.249 [2024-07-16 01:32:19.117833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.249 [2024-07-16 01:32:19.117850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.249 [2024-07-16 01:32:19.117857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.249 [2024-07-16 01:32:19.118028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.249 [2024-07-16 01:32:19.118200] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.249 [2024-07-16 01:32:19.118210] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.249 [2024-07-16 01:32:19.118216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.249 [2024-07-16 01:32:19.120969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.249 [2024-07-16 01:32:19.130360] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.249 [2024-07-16 01:32:19.130789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.249 [2024-07-16 01:32:19.130805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.249 [2024-07-16 01:32:19.130816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.249 [2024-07-16 01:32:19.130987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.249 [2024-07-16 01:32:19.131158] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.249 [2024-07-16 01:32:19.131168] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.249 [2024-07-16 01:32:19.131174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.249 [2024-07-16 01:32:19.133921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.249 [2024-07-16 01:32:19.143333] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.249 [2024-07-16 01:32:19.143768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.249 [2024-07-16 01:32:19.143784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.249 [2024-07-16 01:32:19.143791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.249 [2024-07-16 01:32:19.143964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.249 [2024-07-16 01:32:19.144135] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.249 [2024-07-16 01:32:19.144144] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.249 [2024-07-16 01:32:19.144151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.249 [2024-07-16 01:32:19.146903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.249 [2024-07-16 01:32:19.156280] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.249 [2024-07-16 01:32:19.156713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.249 [2024-07-16 01:32:19.156730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.249 [2024-07-16 01:32:19.156737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.249 [2024-07-16 01:32:19.156909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.249 [2024-07-16 01:32:19.157082] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.249 [2024-07-16 01:32:19.157091] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.249 [2024-07-16 01:32:19.157098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.249 [2024-07-16 01:32:19.159842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.249 [2024-07-16 01:32:19.169267] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.249 [2024-07-16 01:32:19.169622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.249 [2024-07-16 01:32:19.169638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.249 [2024-07-16 01:32:19.169645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.249 [2024-07-16 01:32:19.169817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.249 [2024-07-16 01:32:19.169990] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.249 [2024-07-16 01:32:19.170004] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.249 [2024-07-16 01:32:19.170010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.249 [2024-07-16 01:32:19.172759] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.249 [2024-07-16 01:32:19.182304] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.249 [2024-07-16 01:32:19.182665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.249 [2024-07-16 01:32:19.182682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.249 [2024-07-16 01:32:19.182689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.249 [2024-07-16 01:32:19.182862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.249 [2024-07-16 01:32:19.183036] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.249 [2024-07-16 01:32:19.183045] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.249 [2024-07-16 01:32:19.183052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.249 [2024-07-16 01:32:19.185799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.249 [2024-07-16 01:32:19.195319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.249 [2024-07-16 01:32:19.195697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.249 [2024-07-16 01:32:19.195714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.249 [2024-07-16 01:32:19.195723] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.249 [2024-07-16 01:32:19.195903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.249 [2024-07-16 01:32:19.196086] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.249 [2024-07-16 01:32:19.196096] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.249 [2024-07-16 01:32:19.196103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.249 [2024-07-16 01:32:19.198891] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.249 [2024-07-16 01:32:19.208664] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.249 [2024-07-16 01:32:19.209092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.249 [2024-07-16 01:32:19.209108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.249 [2024-07-16 01:32:19.209115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.249 [2024-07-16 01:32:19.209287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.249 [2024-07-16 01:32:19.209467] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.249 [2024-07-16 01:32:19.209477] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.249 [2024-07-16 01:32:19.209483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.249 [2024-07-16 01:32:19.212228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.249 [2024-07-16 01:32:19.221709] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.249 [2024-07-16 01:32:19.222147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.249 [2024-07-16 01:32:19.222164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.249 [2024-07-16 01:32:19.222172] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.249 [2024-07-16 01:32:19.222350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.249 [2024-07-16 01:32:19.222524] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.249 [2024-07-16 01:32:19.222533] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.249 [2024-07-16 01:32:19.222540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.249 [2024-07-16 01:32:19.225280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.508 [2024-07-16 01:32:19.234836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.508 [2024-07-16 01:32:19.235280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.508 [2024-07-16 01:32:19.235297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.508 [2024-07-16 01:32:19.235305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.508 [2024-07-16 01:32:19.235495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.508 [2024-07-16 01:32:19.235680] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.508 [2024-07-16 01:32:19.235690] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.508 [2024-07-16 01:32:19.235696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.508 [2024-07-16 01:32:19.238575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.508 [2024-07-16 01:32:19.248055] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.508 [2024-07-16 01:32:19.248372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.508 [2024-07-16 01:32:19.248390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.508 [2024-07-16 01:32:19.248398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.508 [2024-07-16 01:32:19.248589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.508 [2024-07-16 01:32:19.248763] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.508 [2024-07-16 01:32:19.248773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.508 [2024-07-16 01:32:19.248779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.508 [2024-07-16 01:32:19.251934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.508 [2024-07-16 01:32:19.261151] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.508 [2024-07-16 01:32:19.261513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.508 [2024-07-16 01:32:19.261531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.508 [2024-07-16 01:32:19.261538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.508 [2024-07-16 01:32:19.261714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.508 [2024-07-16 01:32:19.261888] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.508 [2024-07-16 01:32:19.261897] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.508 [2024-07-16 01:32:19.261904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.508 [2024-07-16 01:32:19.264589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.508 [2024-07-16 01:32:19.274003] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.508 [2024-07-16 01:32:19.274284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.508 [2024-07-16 01:32:19.274300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.508 [2024-07-16 01:32:19.274306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.508 [2024-07-16 01:32:19.274491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.508 [2024-07-16 01:32:19.274659] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.509 [2024-07-16 01:32:19.274668] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.509 [2024-07-16 01:32:19.274674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.509 [2024-07-16 01:32:19.277344] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.509 [2024-07-16 01:32:19.286990] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.509 [2024-07-16 01:32:19.287346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.509 [2024-07-16 01:32:19.287363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.509 [2024-07-16 01:32:19.287371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.509 [2024-07-16 01:32:19.287538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.509 [2024-07-16 01:32:19.287707] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.509 [2024-07-16 01:32:19.287717] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.509 [2024-07-16 01:32:19.287724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.509 [2024-07-16 01:32:19.290365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.509 [2024-07-16 01:32:19.299893] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.509 [2024-07-16 01:32:19.300233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.509 [2024-07-16 01:32:19.300249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.509 [2024-07-16 01:32:19.300257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.509 [2024-07-16 01:32:19.300422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.509 [2024-07-16 01:32:19.300581] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.509 [2024-07-16 01:32:19.300590] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.509 [2024-07-16 01:32:19.300600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.509 [2024-07-16 01:32:19.303219] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.509 [2024-07-16 01:32:19.312763] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.509 [2024-07-16 01:32:19.313037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.509 [2024-07-16 01:32:19.313053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.509 [2024-07-16 01:32:19.313060] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.509 [2024-07-16 01:32:19.313218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.509 [2024-07-16 01:32:19.313383] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.509 [2024-07-16 01:32:19.313393] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.509 [2024-07-16 01:32:19.313399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.509 [2024-07-16 01:32:19.315978] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.509 [2024-07-16 01:32:19.325642] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.509 [2024-07-16 01:32:19.325911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.509 [2024-07-16 01:32:19.325927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.509 [2024-07-16 01:32:19.325934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.509 [2024-07-16 01:32:19.326091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.509 [2024-07-16 01:32:19.326249] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.509 [2024-07-16 01:32:19.326258] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.509 [2024-07-16 01:32:19.326264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.509 [2024-07-16 01:32:19.328951] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.509 [2024-07-16 01:32:19.338650] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.509 [2024-07-16 01:32:19.339055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.509 [2024-07-16 01:32:19.339071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.509 [2024-07-16 01:32:19.339078] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.509 [2024-07-16 01:32:19.339246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.509 [2024-07-16 01:32:19.339438] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.509 [2024-07-16 01:32:19.339448] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.509 [2024-07-16 01:32:19.339455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.509 [2024-07-16 01:32:19.342193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.509 [2024-07-16 01:32:19.351434] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.509 [2024-07-16 01:32:19.351807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.509 [2024-07-16 01:32:19.351822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.509 [2024-07-16 01:32:19.351830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.509 [2024-07-16 01:32:19.351989] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.509 [2024-07-16 01:32:19.352148] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.509 [2024-07-16 01:32:19.352156] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.509 [2024-07-16 01:32:19.352162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.509 [2024-07-16 01:32:19.354819] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.509 [2024-07-16 01:32:19.364304] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.509 [2024-07-16 01:32:19.364588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.509 [2024-07-16 01:32:19.364604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.509 [2024-07-16 01:32:19.364610] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.509 [2024-07-16 01:32:19.364767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.509 [2024-07-16 01:32:19.364925] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.509 [2024-07-16 01:32:19.364934] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.509 [2024-07-16 01:32:19.364940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.509 [2024-07-16 01:32:19.367547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.509 [2024-07-16 01:32:19.377191] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.509 [2024-07-16 01:32:19.377477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.509 [2024-07-16 01:32:19.377493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.509 [2024-07-16 01:32:19.377499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.509 [2024-07-16 01:32:19.377658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.509 [2024-07-16 01:32:19.377816] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.509 [2024-07-16 01:32:19.377825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.509 [2024-07-16 01:32:19.377831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.510 [2024-07-16 01:32:19.380434] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.510 [2024-07-16 01:32:19.390076] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.510 [2024-07-16 01:32:19.390478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.510 [2024-07-16 01:32:19.390495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.510 [2024-07-16 01:32:19.390502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.510 [2024-07-16 01:32:19.390676] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.510 [2024-07-16 01:32:19.390839] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.510 [2024-07-16 01:32:19.390848] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.510 [2024-07-16 01:32:19.390854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.510 [2024-07-16 01:32:19.393405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.510 [2024-07-16 01:32:19.402901] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.510 [2024-07-16 01:32:19.403315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.510 [2024-07-16 01:32:19.403332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.510 [2024-07-16 01:32:19.403347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.510 [2024-07-16 01:32:19.403505] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.510 [2024-07-16 01:32:19.403666] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.510 [2024-07-16 01:32:19.403675] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.510 [2024-07-16 01:32:19.403681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.510 [2024-07-16 01:32:19.406268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.510 [2024-07-16 01:32:19.415795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.510 [2024-07-16 01:32:19.416204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.510 [2024-07-16 01:32:19.416220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.510 [2024-07-16 01:32:19.416227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.510 [2024-07-16 01:32:19.416390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.510 [2024-07-16 01:32:19.416549] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.510 [2024-07-16 01:32:19.416558] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.510 [2024-07-16 01:32:19.416564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.510 [2024-07-16 01:32:19.419144] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.510 [2024-07-16 01:32:19.428669] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.510 [2024-07-16 01:32:19.429008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.510 [2024-07-16 01:32:19.429050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.510 [2024-07-16 01:32:19.429072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.510 [2024-07-16 01:32:19.429664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.510 [2024-07-16 01:32:19.430191] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.510 [2024-07-16 01:32:19.430205] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.510 [2024-07-16 01:32:19.430216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.510 [2024-07-16 01:32:19.434680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.510 [2024-07-16 01:32:19.442446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.510 [2024-07-16 01:32:19.442812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.510 [2024-07-16 01:32:19.442855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.510 [2024-07-16 01:32:19.442878] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.510 [2024-07-16 01:32:19.443471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.510 [2024-07-16 01:32:19.444053] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.510 [2024-07-16 01:32:19.444078] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.510 [2024-07-16 01:32:19.444102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.510 [2024-07-16 01:32:19.447013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.510 [2024-07-16 01:32:19.455333] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.510 [2024-07-16 01:32:19.455681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.510 [2024-07-16 01:32:19.455697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.510 [2024-07-16 01:32:19.455704] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.510 [2024-07-16 01:32:19.455862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.510 [2024-07-16 01:32:19.456021] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.510 [2024-07-16 01:32:19.456030] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.510 [2024-07-16 01:32:19.456036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.510 [2024-07-16 01:32:19.458654] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.510 [2024-07-16 01:32:19.468280] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.510 [2024-07-16 01:32:19.468562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.510 [2024-07-16 01:32:19.468578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.510 [2024-07-16 01:32:19.468584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.510 [2024-07-16 01:32:19.468743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.510 [2024-07-16 01:32:19.468902] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.510 [2024-07-16 01:32:19.468910] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.510 [2024-07-16 01:32:19.468916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.510 [2024-07-16 01:32:19.471521] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.510 [2024-07-16 01:32:19.481148] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.510 [2024-07-16 01:32:19.481490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.510 [2024-07-16 01:32:19.481506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.510 [2024-07-16 01:32:19.481517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.510 [2024-07-16 01:32:19.481676] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.510 [2024-07-16 01:32:19.481834] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.510 [2024-07-16 01:32:19.481843] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.510 [2024-07-16 01:32:19.481849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.511 [2024-07-16 01:32:19.484453] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.511 [2024-07-16 01:32:19.494202] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.511 [2024-07-16 01:32:19.494616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.770 [2024-07-16 01:32:19.494633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.770 [2024-07-16 01:32:19.494640] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.770 [2024-07-16 01:32:19.494812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.770 [2024-07-16 01:32:19.494984] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.770 [2024-07-16 01:32:19.494993] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.770 [2024-07-16 01:32:19.494999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.770 [2024-07-16 01:32:19.497651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.770 [2024-07-16 01:32:19.507039] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.770 [2024-07-16 01:32:19.507450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.770 [2024-07-16 01:32:19.507467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.770 [2024-07-16 01:32:19.507473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.770 [2024-07-16 01:32:19.507632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.770 [2024-07-16 01:32:19.507790] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.770 [2024-07-16 01:32:19.507799] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.770 [2024-07-16 01:32:19.507806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.770 [2024-07-16 01:32:19.510534] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.770 [2024-07-16 01:32:19.520020] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.770 [2024-07-16 01:32:19.520421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.770 [2024-07-16 01:32:19.520464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.770 [2024-07-16 01:32:19.520486] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.770 [2024-07-16 01:32:19.520945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.770 [2024-07-16 01:32:19.521105] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.770 [2024-07-16 01:32:19.521117] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.770 [2024-07-16 01:32:19.521123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.770 [2024-07-16 01:32:19.525276] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.770 [2024-07-16 01:32:19.533681] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.770 [2024-07-16 01:32:19.534137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.770 [2024-07-16 01:32:19.534180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.770 [2024-07-16 01:32:19.534202] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.770 [2024-07-16 01:32:19.534793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.770 [2024-07-16 01:32:19.535231] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.770 [2024-07-16 01:32:19.535240] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.770 [2024-07-16 01:32:19.535247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.770 [2024-07-16 01:32:19.538163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.770 [2024-07-16 01:32:19.546527] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.770 [2024-07-16 01:32:19.546964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.770 [2024-07-16 01:32:19.547006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.770 [2024-07-16 01:32:19.547027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.771 [2024-07-16 01:32:19.547507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.771 [2024-07-16 01:32:19.547667] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.771 [2024-07-16 01:32:19.547676] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.771 [2024-07-16 01:32:19.547682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.771 [2024-07-16 01:32:19.550265] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.771 [2024-07-16 01:32:19.559409] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.771 [2024-07-16 01:32:19.559814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.771 [2024-07-16 01:32:19.559829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.771 [2024-07-16 01:32:19.559836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.771 [2024-07-16 01:32:19.559995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.771 [2024-07-16 01:32:19.560153] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.771 [2024-07-16 01:32:19.560162] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.771 [2024-07-16 01:32:19.560168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.771 [2024-07-16 01:32:19.562789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.771 [2024-07-16 01:32:19.572291] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.771 [2024-07-16 01:32:19.572703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.771 [2024-07-16 01:32:19.572740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.771 [2024-07-16 01:32:19.572763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.771 [2024-07-16 01:32:19.573305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.771 [2024-07-16 01:32:19.573489] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.771 [2024-07-16 01:32:19.573497] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.771 [2024-07-16 01:32:19.573503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.771 [2024-07-16 01:32:19.576088] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.771 [2024-07-16 01:32:19.585188] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.771 [2024-07-16 01:32:19.585601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.771 [2024-07-16 01:32:19.585617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.771 [2024-07-16 01:32:19.585624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.771 [2024-07-16 01:32:19.585782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.771 [2024-07-16 01:32:19.585941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.771 [2024-07-16 01:32:19.585950] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.771 [2024-07-16 01:32:19.585956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.771 [2024-07-16 01:32:19.588567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.771 [2024-07-16 01:32:19.597943] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.771 [2024-07-16 01:32:19.598223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.771 [2024-07-16 01:32:19.598238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.771 [2024-07-16 01:32:19.598244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.771 [2024-07-16 01:32:19.598426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.771 [2024-07-16 01:32:19.598594] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.771 [2024-07-16 01:32:19.598604] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.771 [2024-07-16 01:32:19.598610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.771 [2024-07-16 01:32:19.601278] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.771 [2024-07-16 01:32:19.610766] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.771 [2024-07-16 01:32:19.611182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.771 [2024-07-16 01:32:19.611198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.771 [2024-07-16 01:32:19.611205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.771 [2024-07-16 01:32:19.611389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.771 [2024-07-16 01:32:19.611558] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.771 [2024-07-16 01:32:19.611567] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.771 [2024-07-16 01:32:19.611573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.771 [2024-07-16 01:32:19.614158] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.771 [2024-07-16 01:32:19.623608] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.771 [2024-07-16 01:32:19.624024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.771 [2024-07-16 01:32:19.624039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.771 [2024-07-16 01:32:19.624046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.771 [2024-07-16 01:32:19.624205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.771 [2024-07-16 01:32:19.624386] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.771 [2024-07-16 01:32:19.624396] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.771 [2024-07-16 01:32:19.624403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.771 [2024-07-16 01:32:19.626995] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.771 [2024-07-16 01:32:19.636317] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.771 [2024-07-16 01:32:19.636737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.771 [2024-07-16 01:32:19.636793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.771 [2024-07-16 01:32:19.636815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.771 [2024-07-16 01:32:19.637352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.771 [2024-07-16 01:32:19.637536] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.771 [2024-07-16 01:32:19.637544] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.771 [2024-07-16 01:32:19.637551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.771 [2024-07-16 01:32:19.640145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.771 [2024-07-16 01:32:19.649135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.771 [2024-07-16 01:32:19.649501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.771 [2024-07-16 01:32:19.649517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.771 [2024-07-16 01:32:19.649524] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.771 [2024-07-16 01:32:19.649691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.771 [2024-07-16 01:32:19.649863] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.771 [2024-07-16 01:32:19.649873] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.771 [2024-07-16 01:32:19.649883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.771 [2024-07-16 01:32:19.652499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.771 [2024-07-16 01:32:19.661979] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.771 [2024-07-16 01:32:19.662422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.771 [2024-07-16 01:32:19.662465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.771 [2024-07-16 01:32:19.662487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.771 [2024-07-16 01:32:19.663066] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.771 [2024-07-16 01:32:19.663331] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.771 [2024-07-16 01:32:19.663342] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.771 [2024-07-16 01:32:19.663350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.771 [2024-07-16 01:32:19.665912] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.771 [2024-07-16 01:32:19.674870] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.771 [2024-07-16 01:32:19.675282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.771 [2024-07-16 01:32:19.675324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.771 [2024-07-16 01:32:19.675361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.771 [2024-07-16 01:32:19.675939] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.771 [2024-07-16 01:32:19.676527] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.771 [2024-07-16 01:32:19.676561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.771 [2024-07-16 01:32:19.676567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.771 [2024-07-16 01:32:19.679180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.771 [2024-07-16 01:32:19.687703] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.771 [2024-07-16 01:32:19.688132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.771 [2024-07-16 01:32:19.688175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.771 [2024-07-16 01:32:19.688197] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.772 [2024-07-16 01:32:19.688597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.772 [2024-07-16 01:32:19.688767] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.772 [2024-07-16 01:32:19.688776] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.772 [2024-07-16 01:32:19.688782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.772 [2024-07-16 01:32:19.691386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.772 [2024-07-16 01:32:19.700559] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.772 [2024-07-16 01:32:19.700913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.772 [2024-07-16 01:32:19.700928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.772 [2024-07-16 01:32:19.700934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.772 [2024-07-16 01:32:19.701092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.772 [2024-07-16 01:32:19.701251] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.772 [2024-07-16 01:32:19.701259] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.772 [2024-07-16 01:32:19.701266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.772 [2024-07-16 01:32:19.703884] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.772 [2024-07-16 01:32:19.713357] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.772 [2024-07-16 01:32:19.713769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.772 [2024-07-16 01:32:19.713819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.772 [2024-07-16 01:32:19.713840] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.772 [2024-07-16 01:32:19.714424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.772 [2024-07-16 01:32:19.714593] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.772 [2024-07-16 01:32:19.714602] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.772 [2024-07-16 01:32:19.714608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.772 [2024-07-16 01:32:19.717188] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.772 [2024-07-16 01:32:19.726200] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.772 [2024-07-16 01:32:19.726617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.772 [2024-07-16 01:32:19.726634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.772 [2024-07-16 01:32:19.726641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.772 [2024-07-16 01:32:19.726798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.772 [2024-07-16 01:32:19.726956] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.772 [2024-07-16 01:32:19.726964] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.772 [2024-07-16 01:32:19.726970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.772 [2024-07-16 01:32:19.729766] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.772 [2024-07-16 01:32:19.738932] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.772 [2024-07-16 01:32:19.739329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.772 [2024-07-16 01:32:19.739378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.772 [2024-07-16 01:32:19.739402] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.772 [2024-07-16 01:32:19.739944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.772 [2024-07-16 01:32:19.740107] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.772 [2024-07-16 01:32:19.740115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.772 [2024-07-16 01:32:19.740121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.772 [2024-07-16 01:32:19.742734] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:53.772 [2024-07-16 01:32:19.751754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:53.772 [2024-07-16 01:32:19.752188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.772 [2024-07-16 01:32:19.752205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:53.772 [2024-07-16 01:32:19.752212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:53.772 [2024-07-16 01:32:19.752391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:53.772 [2024-07-16 01:32:19.752563] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:53.772 [2024-07-16 01:32:19.752572] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:53.772 [2024-07-16 01:32:19.752579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:53.772 [2024-07-16 01:32:19.755334] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:54.031 [2024-07-16 01:32:19.764664] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:54.031 [2024-07-16 01:32:19.765118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.031 [2024-07-16 01:32:19.765159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:54.031 [2024-07-16 01:32:19.765181] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:54.031 [2024-07-16 01:32:19.765692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:54.031 [2024-07-16 01:32:19.765860] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:54.031 [2024-07-16 01:32:19.765868] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:54.031 [2024-07-16 01:32:19.765874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:54.031 [2024-07-16 01:32:19.768643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:54.032 [2024-07-16 01:32:19.777624] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:54.032 [2024-07-16 01:32:19.778043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.032 [2024-07-16 01:32:19.778060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:54.032 [2024-07-16 01:32:19.778066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:54.032 [2024-07-16 01:32:19.778233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:54.032 [2024-07-16 01:32:19.778407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:54.032 [2024-07-16 01:32:19.778417] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:54.032 [2024-07-16 01:32:19.778424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:54.032 [2024-07-16 01:32:19.780971] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:54.032 [2024-07-16 01:32:19.790443] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:54.032 [2024-07-16 01:32:19.790882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.032 [2024-07-16 01:32:19.790924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:54.032 [2024-07-16 01:32:19.790946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:54.032 [2024-07-16 01:32:19.791535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:54.032 [2024-07-16 01:32:19.792014] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:54.032 [2024-07-16 01:32:19.792023] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:54.032 [2024-07-16 01:32:19.792030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:54.032 [2024-07-16 01:32:19.794592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:54.032 [2024-07-16 01:32:19.803172] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:54.032 [2024-07-16 01:32:19.803566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.032 [2024-07-16 01:32:19.803583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:54.032 [2024-07-16 01:32:19.803589] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:54.032 [2024-07-16 01:32:19.803749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:54.032 [2024-07-16 01:32:19.803908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:54.032 [2024-07-16 01:32:19.803917] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:54.032 [2024-07-16 01:32:19.803923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:54.032 [2024-07-16 01:32:19.806540] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:54.032 [2024-07-16 01:32:19.816004] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:54.032 [2024-07-16 01:32:19.816416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.032 [2024-07-16 01:32:19.816432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:54.032 [2024-07-16 01:32:19.816439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:54.032 [2024-07-16 01:32:19.816597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:54.032 [2024-07-16 01:32:19.816756] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:54.032 [2024-07-16 01:32:19.816764] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:54.032 [2024-07-16 01:32:19.816770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:54.032 [2024-07-16 01:32:19.819383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:54.032 [2024-07-16 01:32:19.828749] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:54.032 [2024-07-16 01:32:19.829141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.032 [2024-07-16 01:32:19.829157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:54.032 [2024-07-16 01:32:19.829168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:54.032 [2024-07-16 01:32:19.829327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:54.032 [2024-07-16 01:32:19.829512] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:54.032 [2024-07-16 01:32:19.829522] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:54.032 [2024-07-16 01:32:19.829528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:54.032 [2024-07-16 01:32:19.832110] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:54.032 [2024-07-16 01:32:19.841565] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:54.032 [2024-07-16 01:32:19.841976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.032 [2024-07-16 01:32:19.841992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:54.032 [2024-07-16 01:32:19.841999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:54.032 [2024-07-16 01:32:19.842156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:54.032 [2024-07-16 01:32:19.842314] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:54.032 [2024-07-16 01:32:19.842323] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:54.032 [2024-07-16 01:32:19.842329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:54.032 [2024-07-16 01:32:19.844941] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:54.032 [2024-07-16 01:32:19.854424] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:54.032 [2024-07-16 01:32:19.854823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.032 [2024-07-16 01:32:19.854840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:54.032 [2024-07-16 01:32:19.854846] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:54.032 [2024-07-16 01:32:19.855004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:54.032 [2024-07-16 01:32:19.855162] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:54.032 [2024-07-16 01:32:19.855171] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:54.032 [2024-07-16 01:32:19.855177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:54.032 [2024-07-16 01:32:19.857789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:54.032 [2024-07-16 01:32:19.867251] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:54.032 [2024-07-16 01:32:19.867603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.032 [2024-07-16 01:32:19.867619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:54.032 [2024-07-16 01:32:19.867625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:54.032 [2024-07-16 01:32:19.867783] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:54.032 [2024-07-16 01:32:19.867942] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:54.032 [2024-07-16 01:32:19.867953] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:54.032 [2024-07-16 01:32:19.867959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:54.032 [2024-07-16 01:32:19.870572] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:54.032 [2024-07-16 01:32:19.880012] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:54.032 [2024-07-16 01:32:19.880405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.032 [2024-07-16 01:32:19.880421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:54.032 [2024-07-16 01:32:19.880427] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:54.032 [2024-07-16 01:32:19.880586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:54.032 [2024-07-16 01:32:19.880744] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:54.032 [2024-07-16 01:32:19.880753] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:54.032 [2024-07-16 01:32:19.880759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:54.032 [2024-07-16 01:32:19.883408] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:54.032 [2024-07-16 01:32:19.892737] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:54.032 [2024-07-16 01:32:19.893148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.032 [2024-07-16 01:32:19.893164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:54.032 [2024-07-16 01:32:19.893170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:54.032 [2024-07-16 01:32:19.893329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:54.032 [2024-07-16 01:32:19.893517] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:54.032 [2024-07-16 01:32:19.893526] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:54.032 [2024-07-16 01:32:19.893532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:54.032 [2024-07-16 01:32:19.896123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:54.032 [2024-07-16 01:32:19.905456] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:54.032 [2024-07-16 01:32:19.905843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.032 [2024-07-16 01:32:19.905859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:54.032 [2024-07-16 01:32:19.905865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:54.032 [2024-07-16 01:32:19.906023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:54.032 [2024-07-16 01:32:19.906181] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:54.032 [2024-07-16 01:32:19.906190] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:54.032 [2024-07-16 01:32:19.906196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:54.033 [2024-07-16 01:32:19.908813] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:54.033 [2024-07-16 01:32:19.918278] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:54.033 [2024-07-16 01:32:19.918705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.033 [2024-07-16 01:32:19.918748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:54.033 [2024-07-16 01:32:19.918769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:54.033 [2024-07-16 01:32:19.919142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:54.033 [2024-07-16 01:32:19.919301] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:54.033 [2024-07-16 01:32:19.919310] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:54.033 [2024-07-16 01:32:19.919315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:54.033 [2024-07-16 01:32:19.921933] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:54.033 [2024-07-16 01:32:19.931092] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:54.033 [2024-07-16 01:32:19.931481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.033 [2024-07-16 01:32:19.931498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:54.033 [2024-07-16 01:32:19.931504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:54.033 [2024-07-16 01:32:19.931663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:54.033 [2024-07-16 01:32:19.931821] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:54.033 [2024-07-16 01:32:19.931830] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:54.033 [2024-07-16 01:32:19.931836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:54.033 [2024-07-16 01:32:19.934450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:54.033 [2024-07-16 01:32:19.943900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:54.033 [2024-07-16 01:32:19.944239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.033 [2024-07-16 01:32:19.944255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:54.033 [2024-07-16 01:32:19.944262] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:54.033 [2024-07-16 01:32:19.944444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:54.033 [2024-07-16 01:32:19.944612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:54.033 [2024-07-16 01:32:19.944622] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:54.033 [2024-07-16 01:32:19.944628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:54.033 [2024-07-16 01:32:19.947208] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:54.033 [2024-07-16 01:32:19.956670] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:54.033 [2024-07-16 01:32:19.957062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.033 [2024-07-16 01:32:19.957078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:54.033 [2024-07-16 01:32:19.957084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:54.033 [2024-07-16 01:32:19.957245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:54.033 [2024-07-16 01:32:19.957427] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:54.033 [2024-07-16 01:32:19.957437] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:54.033 [2024-07-16 01:32:19.957443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:54.033 [2024-07-16 01:32:19.960031] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:54.033 [2024-07-16 01:32:19.969491] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:54.033 [2024-07-16 01:32:19.969928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.033 [2024-07-16 01:32:19.969971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:54.033 [2024-07-16 01:32:19.969993] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:54.033 [2024-07-16 01:32:19.970495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:54.033 [2024-07-16 01:32:19.970663] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:54.033 [2024-07-16 01:32:19.970671] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:54.033 [2024-07-16 01:32:19.970677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:54.033 [2024-07-16 01:32:19.973257] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:54.033 [2024-07-16 01:32:19.982279] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:54.033 [2024-07-16 01:32:19.982693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.033 [2024-07-16 01:32:19.982710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:54.033 [2024-07-16 01:32:19.982716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:54.033 [2024-07-16 01:32:19.982874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:54.033 [2024-07-16 01:32:19.983032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:54.033 [2024-07-16 01:32:19.983041] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:54.033 [2024-07-16 01:32:19.983046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:54.033 [2024-07-16 01:32:19.985662] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:54.033 [2024-07-16 01:32:19.995131] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:54.033 [2024-07-16 01:32:19.995550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.033 [2024-07-16 01:32:19.995565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:54.033 [2024-07-16 01:32:19.995572] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:54.033 [2024-07-16 01:32:19.995730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:54.033 [2024-07-16 01:32:19.995889] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:54.033 [2024-07-16 01:32:19.995897] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:54.033 [2024-07-16 01:32:19.995907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:54.033 [2024-07-16 01:32:19.998517] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:54.033 [2024-07-16 01:32:20.008112] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:54.033 [2024-07-16 01:32:20.008585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.033 [2024-07-16 01:32:20.008602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:54.033 [2024-07-16 01:32:20.008609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:54.033 [2024-07-16 01:32:20.008781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:54.033 [2024-07-16 01:32:20.008955] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:54.033 [2024-07-16 01:32:20.008964] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:54.033 [2024-07-16 01:32:20.008970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:54.033 [2024-07-16 01:32:20.011713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:54.292 [2024-07-16 01:32:20.021097] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:54.293 [2024-07-16 01:32:20.021442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.293 [2024-07-16 01:32:20.021460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:54.293 [2024-07-16 01:32:20.021468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:54.293 [2024-07-16 01:32:20.021652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:54.293 [2024-07-16 01:32:20.021835] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:54.293 [2024-07-16 01:32:20.021845] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:54.293 [2024-07-16 01:32:20.021852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:54.293 [2024-07-16 01:32:20.025886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:54.293 [2024-07-16 01:32:20.034207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:54.293 [2024-07-16 01:32:20.034623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.293 [2024-07-16 01:32:20.034641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:54.293 [2024-07-16 01:32:20.034650] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:54.293 [2024-07-16 01:32:20.034819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:54.293 [2024-07-16 01:32:20.034989] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:54.293 [2024-07-16 01:32:20.034999] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:54.293 [2024-07-16 01:32:20.035008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:54.293 [2024-07-16 01:32:20.037684] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:54.293 [2024-07-16 01:32:20.047174] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:54.293 [2024-07-16 01:32:20.047562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.293 [2024-07-16 01:32:20.047578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420
00:26:54.293 [2024-07-16 01:32:20.047586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set
00:26:54.293 [2024-07-16 01:32:20.047755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor
00:26:54.293 [2024-07-16 01:32:20.047923] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:54.293 [2024-07-16 01:32:20.047934] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:54.293 [2024-07-16 01:32:20.047940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:54.293 [2024-07-16 01:32:20.050604] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:54.293 [2024-07-16 01:32:20.060124] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.293 [2024-07-16 01:32:20.060474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.293 [2024-07-16 01:32:20.060491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:54.293 [2024-07-16 01:32:20.060499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:54.293 [2024-07-16 01:32:20.060666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:54.293 [2024-07-16 01:32:20.060834] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.293 [2024-07-16 01:32:20.060843] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.293 [2024-07-16 01:32:20.060850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.293 [2024-07-16 01:32:20.063522] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.293 [2024-07-16 01:32:20.073142] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.293 [2024-07-16 01:32:20.073506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.293 [2024-07-16 01:32:20.073523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:54.293 [2024-07-16 01:32:20.073530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:54.293 [2024-07-16 01:32:20.073697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:54.293 [2024-07-16 01:32:20.073865] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.293 [2024-07-16 01:32:20.073874] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.293 [2024-07-16 01:32:20.073881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.293 [2024-07-16 01:32:20.076556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.293 [2024-07-16 01:32:20.086152] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.293 [2024-07-16 01:32:20.086579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.293 [2024-07-16 01:32:20.086595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:54.293 [2024-07-16 01:32:20.086602] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:54.293 [2024-07-16 01:32:20.086770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:54.293 [2024-07-16 01:32:20.086940] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.293 [2024-07-16 01:32:20.086949] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.293 [2024-07-16 01:32:20.086955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.293 [2024-07-16 01:32:20.089700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.293 [2024-07-16 01:32:20.099152] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.293 [2024-07-16 01:32:20.099486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.293 [2024-07-16 01:32:20.099502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:54.293 [2024-07-16 01:32:20.099509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:54.293 [2024-07-16 01:32:20.099675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:54.293 [2024-07-16 01:32:20.099843] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.293 [2024-07-16 01:32:20.099852] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.293 [2024-07-16 01:32:20.099858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.293 [2024-07-16 01:32:20.102525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.293 [2024-07-16 01:32:20.112104] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.293 [2024-07-16 01:32:20.112531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.293 [2024-07-16 01:32:20.112548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:54.293 [2024-07-16 01:32:20.112555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:54.293 [2024-07-16 01:32:20.112723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:54.293 [2024-07-16 01:32:20.112891] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.293 [2024-07-16 01:32:20.112900] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.293 [2024-07-16 01:32:20.112906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.293 [2024-07-16 01:32:20.115576] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.293 [2024-07-16 01:32:20.125011] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.293 [2024-07-16 01:32:20.125418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.293 [2024-07-16 01:32:20.125434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:54.293 [2024-07-16 01:32:20.125442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:54.293 [2024-07-16 01:32:20.125613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:54.293 [2024-07-16 01:32:20.125772] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.293 [2024-07-16 01:32:20.125781] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.293 [2024-07-16 01:32:20.125786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.293 [2024-07-16 01:32:20.128447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.293 [2024-07-16 01:32:20.137933] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.293 [2024-07-16 01:32:20.138354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.293 [2024-07-16 01:32:20.138370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:54.293 [2024-07-16 01:32:20.138378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:54.293 [2024-07-16 01:32:20.138552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:54.293 [2024-07-16 01:32:20.138712] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.293 [2024-07-16 01:32:20.138721] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.293 [2024-07-16 01:32:20.138727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.293 [2024-07-16 01:32:20.141381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.293 [2024-07-16 01:32:20.150937] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.293 [2024-07-16 01:32:20.151357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.293 [2024-07-16 01:32:20.151373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:54.293 [2024-07-16 01:32:20.151381] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:54.293 [2024-07-16 01:32:20.151547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:54.293 [2024-07-16 01:32:20.151714] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.293 [2024-07-16 01:32:20.151723] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.293 [2024-07-16 01:32:20.151730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.294 [2024-07-16 01:32:20.154394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.294 [2024-07-16 01:32:20.163799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.294 [2024-07-16 01:32:20.164221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.294 [2024-07-16 01:32:20.164236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:54.294 [2024-07-16 01:32:20.164243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:54.294 [2024-07-16 01:32:20.164415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:54.294 [2024-07-16 01:32:20.164583] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.294 [2024-07-16 01:32:20.164592] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.294 [2024-07-16 01:32:20.164599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.294 [2024-07-16 01:32:20.167257] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.294 [2024-07-16 01:32:20.176681] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.294 [2024-07-16 01:32:20.177096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.294 [2024-07-16 01:32:20.177112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:54.294 [2024-07-16 01:32:20.177122] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:54.294 [2024-07-16 01:32:20.177288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:54.294 [2024-07-16 01:32:20.177460] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.294 [2024-07-16 01:32:20.177469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.294 [2024-07-16 01:32:20.177475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.294 [2024-07-16 01:32:20.180142] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.294 [2024-07-16 01:32:20.189556] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.294 [2024-07-16 01:32:20.189959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.294 [2024-07-16 01:32:20.189976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:54.294 [2024-07-16 01:32:20.189983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:54.294 [2024-07-16 01:32:20.190149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:54.294 [2024-07-16 01:32:20.190316] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.294 [2024-07-16 01:32:20.190325] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.294 [2024-07-16 01:32:20.190332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.294 [2024-07-16 01:32:20.193006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3537342 Killed "${NVMF_APP[@]}" "$@" 00:26:54.294 01:32:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:26:54.294 01:32:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:54.294 01:32:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:54.294 [2024-07-16 01:32:20.202488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.294 01:32:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:54.294 01:32:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:54.294 [2024-07-16 01:32:20.202920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.294 [2024-07-16 01:32:20.202937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:54.294 [2024-07-16 01:32:20.202945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:54.294 [2024-07-16 01:32:20.203116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:54.294 [2024-07-16 01:32:20.203289] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.294 [2024-07-16 01:32:20.203298] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.294 [2024-07-16 01:32:20.203305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.294 [2024-07-16 01:32:20.206052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.294 01:32:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3538751 00:26:54.294 01:32:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3538751 00:26:54.294 01:32:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:54.294 01:32:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 3538751 ']' 00:26:54.294 01:32:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:54.294 01:32:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:54.294 01:32:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:54.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:54.294 01:32:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:54.294 01:32:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:54.294 [2024-07-16 01:32:20.215457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.294 [2024-07-16 01:32:20.215817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.294 [2024-07-16 01:32:20.215834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:54.294 [2024-07-16 01:32:20.215843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:54.294 [2024-07-16 01:32:20.216016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:54.294 [2024-07-16 01:32:20.216189] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.294 [2024-07-16 01:32:20.216199] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.294 [2024-07-16 01:32:20.216208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.294 [2024-07-16 01:32:20.219144] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
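At this point bdevperf.sh has killed the previous target (the "line 35: 3537342 Killed" message above) and tgt_init restarts it: nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace with the flags logged above, records nvmfpid=3538751, and waitforlisten polls until the RPC socket answers. A sketch of the equivalent manual restart; the binary path and flags are the ones logged above, while the polling loop and the transport/subsystem/listener RPCs are assumptions about what the test re-creates:

# relaunch the target in its network namespace (flags as logged above)
sudo ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
  -i 0 -e 0xFFFF -m 0xE &

# wait for /var/tmp/spdk.sock to answer, roughly what waitforlisten does
until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done

# re-create the listener the host is retrying (assumed subsystem config)
./scripts/rpc.py nvmf_create_transport -t tcp
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
  -t tcp -a 10.0.0.2 -s 4420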
00:26:54.294 [2024-07-16 01:32:20.228548] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.294 [2024-07-16 01:32:20.228849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.294 [2024-07-16 01:32:20.228866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:54.294 [2024-07-16 01:32:20.228874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:54.294 [2024-07-16 01:32:20.229047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:54.294 [2024-07-16 01:32:20.229220] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.294 [2024-07-16 01:32:20.229228] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.294 [2024-07-16 01:32:20.229235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.294 [2024-07-16 01:32:20.231984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.294 [2024-07-16 01:32:20.241519] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.294 [2024-07-16 01:32:20.241867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.294 [2024-07-16 01:32:20.241884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:54.294 [2024-07-16 01:32:20.241891] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:54.294 [2024-07-16 01:32:20.242059] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:54.294 [2024-07-16 01:32:20.242227] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.294 [2024-07-16 01:32:20.242240] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.294 [2024-07-16 01:32:20.242247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.294 [2024-07-16 01:32:20.244920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.294 [2024-07-16 01:32:20.253873] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
00:26:54.294 [2024-07-16 01:32:20.253915] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:54.294 [2024-07-16 01:32:20.254510] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.294 [2024-07-16 01:32:20.254936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.294 [2024-07-16 01:32:20.254953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:54.294 [2024-07-16 01:32:20.254960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:54.294 [2024-07-16 01:32:20.255128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:54.294 [2024-07-16 01:32:20.255296] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.294 [2024-07-16 01:32:20.255305] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.294 [2024-07-16 01:32:20.255312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.294 [2024-07-16 01:32:20.257978] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.294 [2024-07-16 01:32:20.267394] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.294 [2024-07-16 01:32:20.267823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.294 [2024-07-16 01:32:20.267840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:54.294 [2024-07-16 01:32:20.267847] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:54.294 [2024-07-16 01:32:20.268014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:54.294 [2024-07-16 01:32:20.268184] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.294 [2024-07-16 01:32:20.268193] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.294 [2024-07-16 01:32:20.268199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.294 [2024-07-16 01:32:20.271012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.554 EAL: No free 2048 kB hugepages reported on node 1 00:26:54.554 [2024-07-16 01:32:20.280471] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.554 [2024-07-16 01:32:20.280848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.554 [2024-07-16 01:32:20.280865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:54.554 [2024-07-16 01:32:20.280873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:54.554 [2024-07-16 01:32:20.281039] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:54.554 [2024-07-16 01:32:20.281207] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.554 [2024-07-16 01:32:20.281220] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.554 [2024-07-16 01:32:20.281227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.554 [2024-07-16 01:32:20.283966] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.554 [2024-07-16 01:32:20.293469] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.554 [2024-07-16 01:32:20.293899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.554 [2024-07-16 01:32:20.293916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:54.554 [2024-07-16 01:32:20.293923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:54.554 [2024-07-16 01:32:20.294090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:54.554 [2024-07-16 01:32:20.294258] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.554 [2024-07-16 01:32:20.294267] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.554 [2024-07-16 01:32:20.294274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.554 [2024-07-16 01:32:20.297007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.554 [2024-07-16 01:32:20.306451] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.554 [2024-07-16 01:32:20.306856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.554 [2024-07-16 01:32:20.306872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:54.554 [2024-07-16 01:32:20.306879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:54.554 [2024-07-16 01:32:20.307046] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:54.554 [2024-07-16 01:32:20.307214] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.554 [2024-07-16 01:32:20.307223] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.554 [2024-07-16 01:32:20.307229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.554 [2024-07-16 01:32:20.309896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.554 [2024-07-16 01:32:20.315568] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:54.554 [2024-07-16 01:32:20.319342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.554 [2024-07-16 01:32:20.319768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.554 [2024-07-16 01:32:20.319784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:54.554 [2024-07-16 01:32:20.319792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:54.554 [2024-07-16 01:32:20.319960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:54.554 [2024-07-16 01:32:20.320128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.554 [2024-07-16 01:32:20.320138] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.554 [2024-07-16 01:32:20.320145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.554 [2024-07-16 01:32:20.322812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
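The "Total cores available: 3" notice above follows from the -m 0xE mask passed to nvmf_tgt: 0xE is binary 1110, so core 0 is excluded and the reactor-start notices further down land on cores 1, 2 and 3. A one-off decode of the mask:

# decode the reactor core mask (0xE -> cores 1, 2, 3)
mask=0xE
for core in $(seq 0 7); do
  (( (mask >> core) & 1 )) && echo "reactor core $core"
done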
00:26:54.554 [2024-07-16 01:32:20.332263] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.554 [2024-07-16 01:32:20.332621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.554 [2024-07-16 01:32:20.332638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:54.554 [2024-07-16 01:32:20.332647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:54.554 [2024-07-16 01:32:20.332814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:54.554 [2024-07-16 01:32:20.332981] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.554 [2024-07-16 01:32:20.332991] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.554 [2024-07-16 01:32:20.332998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.554 [2024-07-16 01:32:20.335667] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.554 [2024-07-16 01:32:20.345267] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.555 [2024-07-16 01:32:20.345622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.555 [2024-07-16 01:32:20.345639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:54.555 [2024-07-16 01:32:20.345647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:54.555 [2024-07-16 01:32:20.345814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:54.555 [2024-07-16 01:32:20.345983] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.555 [2024-07-16 01:32:20.345993] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.555 [2024-07-16 01:32:20.345999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.555 [2024-07-16 01:32:20.348672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.555 [2024-07-16 01:32:20.358274] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.555 [2024-07-16 01:32:20.358627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.555 [2024-07-16 01:32:20.358647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:54.555 [2024-07-16 01:32:20.358655] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:54.555 [2024-07-16 01:32:20.358824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:54.555 [2024-07-16 01:32:20.358992] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.555 [2024-07-16 01:32:20.359001] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.555 [2024-07-16 01:32:20.359009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.555 [2024-07-16 01:32:20.361677] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.555 [2024-07-16 01:32:20.371292] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.555 [2024-07-16 01:32:20.371730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.555 [2024-07-16 01:32:20.371747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:54.555 [2024-07-16 01:32:20.371755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:54.555 [2024-07-16 01:32:20.371928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:54.555 [2024-07-16 01:32:20.372097] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.555 [2024-07-16 01:32:20.372106] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.555 [2024-07-16 01:32:20.372113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.555 [2024-07-16 01:32:20.374782] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.555 [2024-07-16 01:32:20.384223] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.555 [2024-07-16 01:32:20.384586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.555 [2024-07-16 01:32:20.384604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:54.555 [2024-07-16 01:32:20.384611] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:54.555 [2024-07-16 01:32:20.384778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:54.555 [2024-07-16 01:32:20.384945] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.555 [2024-07-16 01:32:20.384954] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.555 [2024-07-16 01:32:20.384960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.555 [2024-07-16 01:32:20.387633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.555 [2024-07-16 01:32:20.393692] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:54.555 [2024-07-16 01:32:20.393721] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:54.555 [2024-07-16 01:32:20.393728] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:54.555 [2024-07-16 01:32:20.393734] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:54.555 [2024-07-16 01:32:20.393740] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:54.555 [2024-07-16 01:32:20.393784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:54.555 [2024-07-16 01:32:20.393803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:54.555 [2024-07-16 01:32:20.393806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:54.555 [2024-07-16 01:32:20.397349] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.555 [2024-07-16 01:32:20.397657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.555 [2024-07-16 01:32:20.397675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:54.555 [2024-07-16 01:32:20.397682] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:54.555 [2024-07-16 01:32:20.397853] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:54.555 [2024-07-16 01:32:20.398025] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.555 [2024-07-16 01:32:20.398033] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.555 [2024-07-16 01:32:20.398041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
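The new target comes up with tracepoint group mask 0xFFFF, and the app_setup_trace notices above name the capture commands themselves. To take a snapshot while the test runs, or keep the raw buffer for offline analysis:

# snapshot the nvmf tracepoints of instance 0 (command from the notice above)
spdk_trace -s nvmf -i 0
# or preserve the shared-memory trace buffer for later decoding
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0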
00:26:54.555 [2024-07-16 01:32:20.400791] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.555 [2024-07-16 01:32:20.410343] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.555 [2024-07-16 01:32:20.410793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.555 [2024-07-16 01:32:20.410811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:54.555 [2024-07-16 01:32:20.410819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:54.555 [2024-07-16 01:32:20.410991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:54.555 [2024-07-16 01:32:20.411163] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.555 [2024-07-16 01:32:20.411172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.555 [2024-07-16 01:32:20.411179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.555 [2024-07-16 01:32:20.413922] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.555 [2024-07-16 01:32:20.423308] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.555 [2024-07-16 01:32:20.423675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.555 [2024-07-16 01:32:20.423693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:54.555 [2024-07-16 01:32:20.423701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:54.555 [2024-07-16 01:32:20.423873] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:54.555 [2024-07-16 01:32:20.424046] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.555 [2024-07-16 01:32:20.424054] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.555 [2024-07-16 01:32:20.424061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.555 [2024-07-16 01:32:20.426806] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.555 [2024-07-16 01:32:20.436393] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.555 [2024-07-16 01:32:20.436829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.555 [2024-07-16 01:32:20.436846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:54.555 [2024-07-16 01:32:20.436854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:54.555 [2024-07-16 01:32:20.437026] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:54.555 [2024-07-16 01:32:20.437198] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.555 [2024-07-16 01:32:20.437206] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.555 [2024-07-16 01:32:20.437214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.555 [2024-07-16 01:32:20.439963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.555 [2024-07-16 01:32:20.449367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.555 [2024-07-16 01:32:20.449814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.555 [2024-07-16 01:32:20.449831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:54.555 [2024-07-16 01:32:20.449838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:54.555 [2024-07-16 01:32:20.450017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:54.555 [2024-07-16 01:32:20.450190] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.555 [2024-07-16 01:32:20.450198] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.555 [2024-07-16 01:32:20.450206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.555 [2024-07-16 01:32:20.452953] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.555 [2024-07-16 01:32:20.462344] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.555 [2024-07-16 01:32:20.462776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.555 [2024-07-16 01:32:20.462791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:54.555 [2024-07-16 01:32:20.462799] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:54.555 [2024-07-16 01:32:20.462971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:54.555 [2024-07-16 01:32:20.463143] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.555 [2024-07-16 01:32:20.463151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.555 [2024-07-16 01:32:20.463158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.555 [2024-07-16 01:32:20.465907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.555 [2024-07-16 01:32:20.475293] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.555 [2024-07-16 01:32:20.475731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.556 [2024-07-16 01:32:20.475747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:54.556 [2024-07-16 01:32:20.475754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:54.556 [2024-07-16 01:32:20.475925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:54.556 [2024-07-16 01:32:20.476097] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.556 [2024-07-16 01:32:20.476105] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.556 [2024-07-16 01:32:20.476112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.556 [2024-07-16 01:32:20.478853] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.556 [2024-07-16 01:32:20.488252] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.556 [2024-07-16 01:32:20.488684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.556 [2024-07-16 01:32:20.488699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:54.556 [2024-07-16 01:32:20.488706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:54.556 [2024-07-16 01:32:20.488877] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:54.556 [2024-07-16 01:32:20.489050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.556 [2024-07-16 01:32:20.489058] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.556 [2024-07-16 01:32:20.489070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.556 [2024-07-16 01:32:20.491819] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.556 [2024-07-16 01:32:20.501357] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.556 [2024-07-16 01:32:20.501785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.556 [2024-07-16 01:32:20.501801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:54.556 [2024-07-16 01:32:20.501807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:54.556 [2024-07-16 01:32:20.501979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:54.556 [2024-07-16 01:32:20.502150] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.556 [2024-07-16 01:32:20.502158] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.556 [2024-07-16 01:32:20.502165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.556 [2024-07-16 01:32:20.504906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.556 [2024-07-16 01:32:20.514451] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.556 [2024-07-16 01:32:20.514862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.556 [2024-07-16 01:32:20.514877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:54.556 [2024-07-16 01:32:20.514884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:54.556 [2024-07-16 01:32:20.515055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:54.556 [2024-07-16 01:32:20.515227] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.556 [2024-07-16 01:32:20.515235] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.556 [2024-07-16 01:32:20.515242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.556 [2024-07-16 01:32:20.517989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.556 [2024-07-16 01:32:20.527550] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.556 [2024-07-16 01:32:20.527960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.556 [2024-07-16 01:32:20.527975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:54.556 [2024-07-16 01:32:20.527982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:54.556 [2024-07-16 01:32:20.528153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:54.556 [2024-07-16 01:32:20.528326] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.556 [2024-07-16 01:32:20.528334] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.556 [2024-07-16 01:32:20.528345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.556 [2024-07-16 01:32:20.531093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.815 [2024-07-16 01:32:20.540506] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.815 [2024-07-16 01:32:20.540943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.815 [2024-07-16 01:32:20.540957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:54.815 [2024-07-16 01:32:20.540964] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:54.815 [2024-07-16 01:32:20.541136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:54.815 [2024-07-16 01:32:20.541308] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.815 [2024-07-16 01:32:20.541316] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.815 [2024-07-16 01:32:20.541323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.815 [2024-07-16 01:32:20.544076] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:54.815 [2024-07-16 01:32:20.553490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:54.815 [2024-07-16 01:32:20.553864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.815 [2024-07-16 01:32:20.553880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:54.815 [2024-07-16 01:32:20.553887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:54.815 [2024-07-16 01:32:20.554058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:54.816 [2024-07-16 01:32:20.554229] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:54.816 [2024-07-16 01:32:20.554237] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:54.816 [2024-07-16 01:32:20.554244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:54.816 [2024-07-16 01:32:20.556988] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:54.816 .. 00:26:55.078 [2024-07-16 01:32:20.566553 .. 01:32:21.051393] (elided: the reconnect cycle shown above -- nvme_ctrlr_disconnect *NOTICE*, posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0xa5dad0 at 10.0.0.2:4420, failed flush (9): Bad file descriptor, controller reinitialization failed, Resetting controller failed. -- repeats 38 more times, one cycle roughly every 13 ms)
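For readers skimming the failure pattern: errno = 111 is ECONNREFUSED, meaning nothing is accepting on 10.0.0.2:4420 yet, so every bdev_nvme reset attempt dies at connect() and the next attempt is scheduled about 13 ms later. A minimal bash sketch of the same probe-until-listening loop, assuming an OpenBSD-style nc is installed; the address, port, interval, and attempt cap are lifted from the timestamps above, not from the test scripts:

# Hypothetical reproduction sketch -- not part of the SPDK test suite.
ADDR=10.0.0.2
PORT=4420
for attempt in $(seq 1 40); do
    if nc -z -w 1 "$ADDR" "$PORT"; then   # -z: connect-only probe, no data
        echo "listener up after $attempt attempts"
        break
    fi                                    # non-zero exit here == ECONNREFUSED (errno 111)
    sleep 0.013                           # ~13 ms between cycles, matching the log
done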
00:26:55.078 [2024-07-16 01:32:21.060782] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.078 [2024-07-16 01:32:21.061195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.078 [2024-07-16 01:32:21.061210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:55.078 [2024-07-16 01:32:21.061217] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:55.078 [2024-07-16 01:32:21.061392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:55.078 [2024-07-16 01:32:21.061565] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.078 [2024-07-16 01:32:21.061575] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.078 [2024-07-16 01:32:21.061581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.337 [2024-07-16 01:32:21.064332] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.337 01:32:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:55.337 01:32:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:26:55.337 01:32:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:55.337 01:32:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:55.337 01:32:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:55.337 [2024-07-16 01:32:21.073880] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.337 [2024-07-16 01:32:21.074290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-16 01:32:21.074306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:55.337 [2024-07-16 01:32:21.074314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:55.337 [2024-07-16 01:32:21.074491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:55.337 [2024-07-16 01:32:21.074662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.337 [2024-07-16 01:32:21.074670] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.337 [2024-07-16 01:32:21.074676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.337 [2024-07-16 01:32:21.077417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:55.337 [2024-07-16 01:32:21.086979] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.337 [2024-07-16 01:32:21.087388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-16 01:32:21.087405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:55.337 [2024-07-16 01:32:21.087414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:55.337 [2024-07-16 01:32:21.087587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:55.337 [2024-07-16 01:32:21.087761] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.337 [2024-07-16 01:32:21.087769] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.337 [2024-07-16 01:32:21.087775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.337 [2024-07-16 01:32:21.090516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.337 [2024-07-16 01:32:21.100053] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.337 [2024-07-16 01:32:21.100385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-16 01:32:21.100401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:55.337 [2024-07-16 01:32:21.100407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:55.337 [2024-07-16 01:32:21.100578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:55.337 [2024-07-16 01:32:21.100750] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.337 [2024-07-16 01:32:21.100761] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.337 [2024-07-16 01:32:21.100767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.337 01:32:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:55.337 [2024-07-16 01:32:21.103509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
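The trap registered just above is the harness's safety net: process_shm dumps the app's shared-memory state and nvmftestfini tears the target down, and hanging both off SIGINT/SIGTERM/EXIT guarantees they run even if the test is killed mid-reset-storm. A minimal sketch of the same idiom, with placeholder functions standing in for SPDK's real helpers:

# Sketch of the cleanup-trap idiom; dump_state/teardown_target are
# stand-ins, not SPDK's process_shm/nvmftestfini.
cleanup() {
    dump_state || :      # best-effort diagnostics; '|| :' keeps the trap from failing
    teardown_target      # stop the app, release hugepages/shm
}
trap cleanup SIGINT SIGTERM EXIT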
00:26:55.337 01:32:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:55.337 01:32:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.337 01:32:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:55.337 [2024-07-16 01:32:21.109014] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:55.337 [2024-07-16 01:32:21.113043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.337 [2024-07-16 01:32:21.113374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-16 01:32:21.113391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:55.337 [2024-07-16 01:32:21.113397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:55.337 [2024-07-16 01:32:21.113568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:55.337 [2024-07-16 01:32:21.113740] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.337 [2024-07-16 01:32:21.113748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.337 [2024-07-16 01:32:21.113754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.337 01:32:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.337 01:32:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:55.337 01:32:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.337 01:32:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:55.337 [2024-07-16 01:32:21.116499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.337 [2024-07-16 01:32:21.126042] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.337 [2024-07-16 01:32:21.126327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-16 01:32:21.126347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:55.337 [2024-07-16 01:32:21.126354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:55.337 [2024-07-16 01:32:21.126524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:55.337 [2024-07-16 01:32:21.126696] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.337 [2024-07-16 01:32:21.126704] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.337 [2024-07-16 01:32:21.126710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.337 [2024-07-16 01:32:21.129447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:55.337 [2024-07-16 01:32:21.138984] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.337 [2024-07-16 01:32:21.139386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-16 01:32:21.139401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:55.337 [2024-07-16 01:32:21.139413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:55.337 [2024-07-16 01:32:21.139584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:55.337 [2024-07-16 01:32:21.139756] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.337 [2024-07-16 01:32:21.139764] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.337 [2024-07-16 01:32:21.139769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.337 [2024-07-16 01:32:21.142525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.337 [2024-07-16 01:32:21.151929] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.337 [2024-07-16 01:32:21.152369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-16 01:32:21.152388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:55.337 [2024-07-16 01:32:21.152395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:55.337 [2024-07-16 01:32:21.152568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:55.337 [2024-07-16 01:32:21.152740] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.338 [2024-07-16 01:32:21.152748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.338 [2024-07-16 01:32:21.152755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.338 Malloc0 00:26:55.338 [2024-07-16 01:32:21.155499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:55.338 01:32:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.338 01:32:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:55.338 01:32:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.338 01:32:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:55.338 [2024-07-16 01:32:21.164888] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.338 [2024-07-16 01:32:21.165328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-16 01:32:21.165347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:55.338 [2024-07-16 01:32:21.165355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:55.338 [2024-07-16 01:32:21.165525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:55.338 [2024-07-16 01:32:21.165698] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.338 [2024-07-16 01:32:21.165706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.338 [2024-07-16 01:32:21.165712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.338 01:32:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.338 01:32:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:55.338 01:32:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.338 01:32:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:55.338 [2024-07-16 01:32:21.168462] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:55.338 01:32:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.338 01:32:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:55.338 01:32:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.338 01:32:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:55.338 [2024-07-16 01:32:21.177855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.338 [2024-07-16 01:32:21.178291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-16 01:32:21.178306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5dad0 with addr=10.0.0.2, port=4420 00:26:55.338 [2024-07-16 01:32:21.178313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dad0 is same with the state(5) to be set 00:26:55.338 [2024-07-16 01:32:21.178488] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5dad0 (9): Bad file descriptor 00:26:55.338 [2024-07-16 01:32:21.178660] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.338 [2024-07-16 01:32:21.178668] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.338 [2024-07-16 01:32:21.178674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.338 [2024-07-16 01:32:21.178811] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:55.338 [2024-07-16 01:32:21.181417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.338 01:32:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.338 01:32:21 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3537822 00:26:55.338 [2024-07-16 01:32:21.190810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.338 [2024-07-16 01:32:21.222211] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
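For readability, the target-side RPC sequence traced above condenses to five calls: create the TCP transport, create a RAM-backed bdev, create the subsystem, attach the bdev as a namespace, and open the listener. A sketch assuming SPDK's scripts/rpc.py against the already-running nvmf_tgt, with the same arguments the test used:

rpc.py nvmf_create_transport -t tcp -o -u 8192        # -o / -u 8192: transport tuning flags as in the trace
rpc.py bdev_malloc_create 64 512 -b Malloc0           # 64 MB malloc bdev, 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420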
00:27:05.302 00:27:05.302 Latency(us) 00:27:05.302 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:05.302 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:05.302 Verification LBA range: start 0x0 length 0x4000 00:27:05.302 Nvme1n1 : 15.01 8241.36 32.19 12987.56 0.00 6010.39 651.46 19972.88 00:27:05.302 =================================================================================================================== 00:27:05.302 Total : 8241.36 32.19 12987.56 0.00 6010.39 651.46 19972.88 00:27:05.302 01:32:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:27:05.302 01:32:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:05.302 01:32:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.302 01:32:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:05.302 01:32:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.302 01:32:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:27:05.302 01:32:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:27:05.302 01:32:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:05.302 01:32:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:27:05.302 01:32:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:05.302 01:32:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:27:05.302 01:32:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:05.302 01:32:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:05.302 rmmod nvme_tcp 00:27:05.302 rmmod nvme_fabrics 00:27:05.302 rmmod nvme_keyring 00:27:05.302 01:32:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:05.302 01:32:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:27:05.302 01:32:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:27:05.302 01:32:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 3538751 ']' 00:27:05.302 01:32:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 3538751 00:27:05.302 01:32:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 3538751 ']' 00:27:05.302 01:32:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 3538751 00:27:05.302 01:32:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:27:05.302 01:32:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:05.302 01:32:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3538751 00:27:05.302 01:32:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:05.302 01:32:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:05.302 01:32:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3538751' 00:27:05.302 killing process with pid 3538751 00:27:05.302 01:32:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 3538751 00:27:05.302 01:32:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 3538751 00:27:05.302 01:32:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:05.302 01:32:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
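A quick consistency check on the bdevperf table above: with the 4096-byte I/O size from the job line, the MiB/s column is simply IOPS scaled by I/O size, so the two columns can be cross-checked directly:

# 8241.36 IOPS * 4096 bytes per I/O / 2^20 bytes per MiB ~= 32.19 MiB/s, matching the report.
awk 'BEGIN { printf "%.2f\n", 8241.36 * 4096 / 1048576 }'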
00:27:05.302 01:32:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:05.302 01:32:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:05.302 01:32:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:05.302 01:32:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.302 01:32:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:05.302 01:32:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:06.676 01:32:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:06.676 00:27:06.676 real 0m26.107s 00:27:06.676 user 1m2.823s 00:27:06.676 sys 0m6.195s 00:27:06.676 01:32:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:06.676 01:32:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:06.676 ************************************ 00:27:06.676 END TEST nvmf_bdevperf 00:27:06.676 ************************************ 00:27:06.676 01:32:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:06.677 01:32:32 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:06.677 01:32:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:06.677 01:32:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:06.677 01:32:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:06.677 ************************************ 00:27:06.677 START TEST nvmf_target_disconnect 00:27:06.677 ************************************ 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:06.677 * Looking for test storage... 
00:27:06.677 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:27:06.677 01:32:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:11.933 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:11.933 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:27:11.933 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:11.933 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:11.934 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:11.934 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:11.934 01:32:37 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:11.934 Found net devices under 0000:86:00.0: cvl_0_0 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:11.934 Found net devices under 0000:86:00.1: cvl_0_1 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:11.934 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:11.934 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:27:11.934 00:27:11.934 --- 10.0.0.2 ping statistics --- 00:27:11.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:11.934 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:11.934 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:11.934 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:27:11.934 00:27:11.934 --- 10.0.0.1 ping statistics --- 00:27:11.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:11.934 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:11.934 ************************************ 00:27:11.934 START TEST nvmf_target_disconnect_tc1 00:27:11.934 ************************************ 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:27:11.934 
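Condensed from the trace above, the network plumbing moves one port of the e810 pair into a private namespace so the initiator (cvl_0_1, 10.0.0.1) and the SPDK target (cvl_0_0, 10.0.0.2) talk over a real link; the essential commands, in the order nvmf_tcp_init ran them:

ip netns add cvl_0_0_ns_spdk                                    # namespace that will own the target-side port
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # open the NVMe/TCP port
ping -c 1 10.0.0.2                                              # initiator -> target, as verified above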
01:32:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:11.934 01:32:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:11.935 01:32:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:11.935 01:32:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:11.935 01:32:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:11.935 01:32:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:11.935 01:32:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:11.935 01:32:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:11.935 01:32:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:11.935 EAL: No free 2048 kB hugepages reported on node 1 00:27:11.935 [2024-07-16 01:32:37.664372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:11.935 [2024-07-16 01:32:37.664470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aefed0 with addr=10.0.0.2, port=4420 00:27:11.935 [2024-07-16 01:32:37.664524] nvme_tcp.c:2712:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:11.935 [2024-07-16 01:32:37.664558] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:11.935 [2024-07-16 01:32:37.664576] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:27:11.935 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:11.935 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:11.935 Initializing NVMe Controllers 00:27:11.935 01:32:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:27:11.935 01:32:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:11.935 01:32:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:11.935 01:32:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:11.935 00:27:11.935 real 0m0.095s 00:27:11.935 user 0m0.039s 00:27:11.935 sys 
0m0.053s 00:27:11.935 01:32:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:11.935 01:32:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:11.935 ************************************ 00:27:11.935 END TEST nvmf_target_disconnect_tc1 00:27:11.935 ************************************ 00:27:11.935 01:32:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:27:11.935 01:32:37 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:11.935 01:32:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:11.935 01:32:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:11.935 01:32:37 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:11.935 ************************************ 00:27:11.935 START TEST nvmf_target_disconnect_tc2 00:27:11.935 ************************************ 00:27:11.935 01:32:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:27:11.935 01:32:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:11.935 01:32:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:11.935 01:32:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:11.935 01:32:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:11.935 01:32:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:11.935 01:32:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3543699 00:27:11.935 01:32:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3543699 00:27:11.935 01:32:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3543699 ']' 00:27:11.935 01:32:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:11.935 01:32:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:11.935 01:32:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:11.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
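The -m 0xF0 mask handed to nvmf_tgt above explains the reactor placement reported just below: 0xF0 is binary 11110000, so exactly cores 4 through 7 carry reactors. A one-line check of the mask arithmetic (illustrative):

mask=0xF0; for b in {0..7}; do (( (mask >> b) & 1 )) && echo "core $b enabled"; done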
00:27:11.935 01:32:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:11.935 01:32:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:11.935 01:32:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:11.935 [2024-07-16 01:32:37.782227] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:27:11.935 [2024-07-16 01:32:37.782272] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:11.935 EAL: No free 2048 kB hugepages reported on node 1 00:27:11.935 [2024-07-16 01:32:37.849773] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:12.193 [2024-07-16 01:32:37.927352] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:12.193 [2024-07-16 01:32:37.927388] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:12.193 [2024-07-16 01:32:37.927396] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:12.193 [2024-07-16 01:32:37.927401] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:12.193 [2024-07-16 01:32:37.927407] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:12.193 [2024-07-16 01:32:37.927948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:27:12.193 [2024-07-16 01:32:37.928042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:27:12.193 [2024-07-16 01:32:37.928138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:12.193 [2024-07-16 01:32:37.928138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:27:12.758 01:32:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:12.758 01:32:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:27:12.758 01:32:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:12.758 01:32:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:12.758 01:32:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:12.758 01:32:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:12.758 01:32:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:12.758 01:32:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.758 01:32:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:12.758 Malloc0 00:27:12.758 01:32:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:27:12.758 01:32:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:12.758 01:32:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.758 01:32:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:12.758 [2024-07-16 01:32:38.631070] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:12.758 01:32:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.758 01:32:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:12.758 01:32:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.758 01:32:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:12.758 01:32:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.758 01:32:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:12.758 01:32:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.758 01:32:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:12.758 01:32:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.758 01:32:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:12.758 01:32:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.758 01:32:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:12.758 [2024-07-16 01:32:38.656085] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:12.758 01:32:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.758 01:32:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:12.758 01:32:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.758 01:32:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:12.758 01:32:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.758 01:32:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3543933 00:27:12.758 01:32:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:27:12.758 01:32:38 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:12.758 EAL: No free 2048 kB hugepages reported on node 1 00:27:15.318 01:32:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3543699 00:27:15.318 01:32:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Write completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Write completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Write completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Write completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Write completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Write completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Write completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Write completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Write completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 [2024-07-16 01:32:40.681710] nvme_qpair.c: 
804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Write completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Write completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Write completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Write completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Write completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Write completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Write completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Write completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Write completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Write completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Write completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Write completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Write completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 [2024-07-16 01:32:40.681914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error 
(sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Write completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Write completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Write completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Write completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Write completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Write completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Write completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Write completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Read completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 Write completed with error (sct=0, sc=8) 00:27:15.318 starting I/O failed 00:27:15.318 [2024-07-16 01:32:40.682100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:15.318 [2024-07-16 01:32:40.682214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.318 [2024-07-16 01:32:40.682230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.318 qpair failed and we were unable to recover it. 00:27:15.318 [2024-07-16 01:32:40.682418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.318 [2024-07-16 01:32:40.682428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.318 qpair failed and we were unable to recover it. 00:27:15.318 [2024-07-16 01:32:40.682606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.318 [2024-07-16 01:32:40.682614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.318 qpair failed and we were unable to recover it. 
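Decoding the completion storm above: sct=0 is the generic command status type and, within it, sc=8 is "Command Aborted due to SQ Deletion", which is how in-flight I/O gets failed back once the target dies under kill -9 and its queue pairs are torn down. A hypothetical helper for spotting that case when scanning logs (the function name is illustrative):

# Prints a human-readable reason for the (sct, sc) pairs seen above.
decode_nvme_status() {
    local sct=$1 sc=$2
    if (( sct == 0 && sc == 8 )); then
        echo "generic status: Command Aborted due to SQ Deletion"
    else
        echo "sct=$sct sc=$sc (see the NVMe base specification status code tables)"
    fi
}
decode_nvme_status 0 8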
00:27:15.318 [... the connect()-failure triplet -- posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. -- repeats continuously from 01:32:40.682709 through 01:32:40.705606 ...]
00:27:15.321 [2024-07-16 01:32:40.705859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.321 [2024-07-16 01:32:40.705889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420
00:27:15.321 qpair failed and we were unable to recover it.
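Two notes on the records above. First, errno = 111 is Linux's ECONNREFUSED: the host is reachable but nothing accepts on the port, i.e. the target's listener on 10.0.0.2:4420 is gone while the initiator keeps retrying. Second, the tqpair pointer changes here (0x1d2cfc0, then 0x7ff0b4000b90, then 0x7ff0bc000b90 below), suggesting the retries have moved on to freshly allocated qpair objects. The socket-level failure can be reproduced stand-alone; the address and port below simply mirror the log:

    /* Minimal repro of "connect() failed, errno = 111": a blocking TCP
     * connect() to a reachable host with no listener on the port fails
     * with ECONNREFUSED (111 on Linux).  If the host is unreachable you
     * get ETIMEDOUT/EHOSTUNREACH instead. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port = htons(4420),          /* NVMe/TCP well-known port */
        };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            printf("connect() failed, errno = %d (%s)\n",
                   errno, strerror(errno));
        }
        close(fd);
        return 0;
    }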
00:27:15.321 [2024-07-16 01:32:40.706079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.321 [2024-07-16 01:32:40.706098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.321 qpair failed and we were unable to recover it.
00:27:15.321 [... the same connect()-failure triplet against tqpair=0x7ff0bc000b90 repeats continuously from 01:32:40.706303 through 01:32:40.719953 ...]
00:27:15.323 [2024-07-16 01:32:40.720056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.324 [2024-07-16 01:32:40.720065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.324 qpair failed and we were unable to recover it. 00:27:15.324 [2024-07-16 01:32:40.720275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.324 [2024-07-16 01:32:40.720284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.324 qpair failed and we were unable to recover it. 00:27:15.324 [2024-07-16 01:32:40.720460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.324 [2024-07-16 01:32:40.720471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.324 qpair failed and we were unable to recover it. 00:27:15.324 [2024-07-16 01:32:40.720569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.324 [2024-07-16 01:32:40.720578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.324 qpair failed and we were unable to recover it. 00:27:15.324 [2024-07-16 01:32:40.720673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.324 [2024-07-16 01:32:40.720686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.324 qpair failed and we were unable to recover it. 00:27:15.324 [2024-07-16 01:32:40.720842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.324 [2024-07-16 01:32:40.720852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.324 qpair failed and we were unable to recover it. 00:27:15.324 [2024-07-16 01:32:40.721020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.324 [2024-07-16 01:32:40.721030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.324 qpair failed and we were unable to recover it. 00:27:15.324 [2024-07-16 01:32:40.721269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.324 [2024-07-16 01:32:40.721298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.324 qpair failed and we were unable to recover it. 00:27:15.324 [2024-07-16 01:32:40.721507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.324 [2024-07-16 01:32:40.721539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.324 qpair failed and we were unable to recover it. 00:27:15.324 [2024-07-16 01:32:40.721781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.324 [2024-07-16 01:32:40.721817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.324 qpair failed and we were unable to recover it. 
00:27:15.324 [2024-07-16 01:32:40.722053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.324 [2024-07-16 01:32:40.722062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.324 qpair failed and we were unable to recover it. 00:27:15.324 [2024-07-16 01:32:40.722258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.324 [2024-07-16 01:32:40.722267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.324 qpair failed and we were unable to recover it. 00:27:15.324 [2024-07-16 01:32:40.722481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.324 [2024-07-16 01:32:40.722491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.324 qpair failed and we were unable to recover it. 00:27:15.324 [2024-07-16 01:32:40.722646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.324 [2024-07-16 01:32:40.722656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.324 qpair failed and we were unable to recover it. 00:27:15.324 [2024-07-16 01:32:40.722795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.324 [2024-07-16 01:32:40.722805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.324 qpair failed and we were unable to recover it. 00:27:15.324 [2024-07-16 01:32:40.722884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.324 [2024-07-16 01:32:40.722893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.324 qpair failed and we were unable to recover it. 00:27:15.324 [2024-07-16 01:32:40.723099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.324 [2024-07-16 01:32:40.723109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.324 qpair failed and we were unable to recover it. 00:27:15.324 [2024-07-16 01:32:40.723206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.324 [2024-07-16 01:32:40.723216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.324 qpair failed and we were unable to recover it. 00:27:15.324 [2024-07-16 01:32:40.723305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.324 [2024-07-16 01:32:40.723314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.324 qpair failed and we were unable to recover it. 00:27:15.324 [2024-07-16 01:32:40.723479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.324 [2024-07-16 01:32:40.723491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.324 qpair failed and we were unable to recover it. 
00:27:15.324 [2024-07-16 01:32:40.723698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.324 [2024-07-16 01:32:40.723727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.324 qpair failed and we were unable to recover it. 00:27:15.324 [2024-07-16 01:32:40.723926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.324 [2024-07-16 01:32:40.723955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.324 qpair failed and we were unable to recover it. 00:27:15.324 [2024-07-16 01:32:40.724198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.324 [2024-07-16 01:32:40.724228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.324 qpair failed and we were unable to recover it. 00:27:15.324 [2024-07-16 01:32:40.724427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.324 [2024-07-16 01:32:40.724458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.324 qpair failed and we were unable to recover it. 00:27:15.324 [2024-07-16 01:32:40.724650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.324 [2024-07-16 01:32:40.724681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.324 qpair failed and we were unable to recover it. 00:27:15.324 [2024-07-16 01:32:40.724812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.324 [2024-07-16 01:32:40.724842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.324 qpair failed and we were unable to recover it. 00:27:15.324 [2024-07-16 01:32:40.724961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.324 [2024-07-16 01:32:40.724971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.324 qpair failed and we were unable to recover it. 00:27:15.324 [2024-07-16 01:32:40.725155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.324 [2024-07-16 01:32:40.725165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.324 qpair failed and we were unable to recover it. 00:27:15.324 [2024-07-16 01:32:40.725411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.324 [2024-07-16 01:32:40.725442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.324 qpair failed and we were unable to recover it. 00:27:15.324 [2024-07-16 01:32:40.725638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.324 [2024-07-16 01:32:40.725668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.324 qpair failed and we were unable to recover it. 
00:27:15.324 [2024-07-16 01:32:40.725855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.324 [2024-07-16 01:32:40.725886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.324 qpair failed and we were unable to recover it. 00:27:15.324 [2024-07-16 01:32:40.726089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.324 [2024-07-16 01:32:40.726118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.324 qpair failed and we were unable to recover it. 00:27:15.324 [2024-07-16 01:32:40.726345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.324 [2024-07-16 01:32:40.726379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.324 qpair failed and we were unable to recover it. 00:27:15.324 [2024-07-16 01:32:40.726535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.324 [2024-07-16 01:32:40.726565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.324 qpair failed and we were unable to recover it. 00:27:15.324 [2024-07-16 01:32:40.726849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.324 [2024-07-16 01:32:40.726880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.324 qpair failed and we were unable to recover it. 00:27:15.324 [2024-07-16 01:32:40.727073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.324 [2024-07-16 01:32:40.727083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.324 qpair failed and we were unable to recover it. 00:27:15.324 [2024-07-16 01:32:40.727175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.325 [2024-07-16 01:32:40.727184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.325 qpair failed and we were unable to recover it. 00:27:15.325 [2024-07-16 01:32:40.727439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.325 [2024-07-16 01:32:40.727449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.325 qpair failed and we were unable to recover it. 00:27:15.325 [2024-07-16 01:32:40.727589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.325 [2024-07-16 01:32:40.727599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.325 qpair failed and we were unable to recover it. 00:27:15.325 [2024-07-16 01:32:40.727812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.325 [2024-07-16 01:32:40.727841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.325 qpair failed and we were unable to recover it. 
00:27:15.325 [2024-07-16 01:32:40.728030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.325 [2024-07-16 01:32:40.728060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.325 qpair failed and we were unable to recover it. 00:27:15.325 [2024-07-16 01:32:40.728363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.325 [2024-07-16 01:32:40.728400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.325 qpair failed and we were unable to recover it. 00:27:15.325 [2024-07-16 01:32:40.728667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.325 [2024-07-16 01:32:40.728697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.325 qpair failed and we were unable to recover it. 00:27:15.325 [2024-07-16 01:32:40.728846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.325 [2024-07-16 01:32:40.728856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.325 qpair failed and we were unable to recover it. 00:27:15.325 [2024-07-16 01:32:40.729069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.325 [2024-07-16 01:32:40.729099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.325 qpair failed and we were unable to recover it. 00:27:15.325 [2024-07-16 01:32:40.729366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.325 [2024-07-16 01:32:40.729398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.325 qpair failed and we were unable to recover it. 00:27:15.325 [2024-07-16 01:32:40.729590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.325 [2024-07-16 01:32:40.729620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.325 qpair failed and we were unable to recover it. 00:27:15.325 [2024-07-16 01:32:40.729852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.325 [2024-07-16 01:32:40.729861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.325 qpair failed and we were unable to recover it. 00:27:15.325 [2024-07-16 01:32:40.730043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.325 [2024-07-16 01:32:40.730053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.325 qpair failed and we were unable to recover it. 00:27:15.325 [2024-07-16 01:32:40.730184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.325 [2024-07-16 01:32:40.730196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.325 qpair failed and we were unable to recover it. 
00:27:15.325 [2024-07-16 01:32:40.730390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.325 [2024-07-16 01:32:40.730401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.325 qpair failed and we were unable to recover it. 00:27:15.325 [2024-07-16 01:32:40.730550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.325 [2024-07-16 01:32:40.730559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.325 qpair failed and we were unable to recover it. 00:27:15.325 [2024-07-16 01:32:40.730713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.325 [2024-07-16 01:32:40.730722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.325 qpair failed and we were unable to recover it. 00:27:15.325 [2024-07-16 01:32:40.730816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.325 [2024-07-16 01:32:40.730825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.325 qpair failed and we were unable to recover it. 00:27:15.325 [2024-07-16 01:32:40.731030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.325 [2024-07-16 01:32:40.731058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.325 qpair failed and we were unable to recover it. 00:27:15.325 [2024-07-16 01:32:40.731297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.325 [2024-07-16 01:32:40.731326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.325 qpair failed and we were unable to recover it. 00:27:15.325 [2024-07-16 01:32:40.731597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.325 [2024-07-16 01:32:40.731633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.325 qpair failed and we were unable to recover it. 00:27:15.325 [2024-07-16 01:32:40.731858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.325 [2024-07-16 01:32:40.731868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.325 qpair failed and we were unable to recover it. 00:27:15.325 [2024-07-16 01:32:40.731971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.325 [2024-07-16 01:32:40.731980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.325 qpair failed and we were unable to recover it. 00:27:15.325 [2024-07-16 01:32:40.732080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.325 [2024-07-16 01:32:40.732089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.325 qpair failed and we were unable to recover it. 
00:27:15.325 [2024-07-16 01:32:40.732242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.325 [2024-07-16 01:32:40.732251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.325 qpair failed and we were unable to recover it. 00:27:15.325 [2024-07-16 01:32:40.732397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.325 [2024-07-16 01:32:40.732407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.325 qpair failed and we were unable to recover it. 00:27:15.325 [2024-07-16 01:32:40.732630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.325 [2024-07-16 01:32:40.732660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.325 qpair failed and we were unable to recover it. 00:27:15.325 [2024-07-16 01:32:40.732857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.325 [2024-07-16 01:32:40.732887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.325 qpair failed and we were unable to recover it. 00:27:15.325 [2024-07-16 01:32:40.733152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.325 [2024-07-16 01:32:40.733181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.325 qpair failed and we were unable to recover it. 00:27:15.325 [2024-07-16 01:32:40.733468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.325 [2024-07-16 01:32:40.733502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.325 qpair failed and we were unable to recover it. 00:27:15.325 [2024-07-16 01:32:40.733777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.325 [2024-07-16 01:32:40.733807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.325 qpair failed and we were unable to recover it. 00:27:15.325 [2024-07-16 01:32:40.734115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.325 [2024-07-16 01:32:40.734145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.325 qpair failed and we were unable to recover it. 00:27:15.325 [2024-07-16 01:32:40.734409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.325 [2024-07-16 01:32:40.734441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.325 qpair failed and we were unable to recover it. 00:27:15.325 [2024-07-16 01:32:40.734722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.325 [2024-07-16 01:32:40.734751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.325 qpair failed and we were unable to recover it. 
00:27:15.325 [2024-07-16 01:32:40.734889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.325 [2024-07-16 01:32:40.734919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.325 qpair failed and we were unable to recover it. 00:27:15.325 [2024-07-16 01:32:40.735158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.325 [2024-07-16 01:32:40.735167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.325 qpair failed and we were unable to recover it. 00:27:15.325 [2024-07-16 01:32:40.735345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.325 [2024-07-16 01:32:40.735355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.325 qpair failed and we were unable to recover it. 00:27:15.325 [2024-07-16 01:32:40.735533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.325 [2024-07-16 01:32:40.735562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.325 qpair failed and we were unable to recover it. 00:27:15.325 [2024-07-16 01:32:40.735695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.325 [2024-07-16 01:32:40.735724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.325 qpair failed and we were unable to recover it. 00:27:15.325 [2024-07-16 01:32:40.735919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.325 [2024-07-16 01:32:40.735948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.325 qpair failed and we were unable to recover it. 00:27:15.325 [2024-07-16 01:32:40.736239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.325 [2024-07-16 01:32:40.736249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.325 qpair failed and we were unable to recover it. 00:27:15.326 [2024-07-16 01:32:40.736483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.326 [2024-07-16 01:32:40.736494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.326 qpair failed and we were unable to recover it. 00:27:15.326 [2024-07-16 01:32:40.736637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.326 [2024-07-16 01:32:40.736647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.326 qpair failed and we were unable to recover it. 00:27:15.326 [2024-07-16 01:32:40.736792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.326 [2024-07-16 01:32:40.736802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.326 qpair failed and we were unable to recover it. 
00:27:15.326 [2024-07-16 01:32:40.736937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.326 [2024-07-16 01:32:40.736947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.326 qpair failed and we were unable to recover it. 00:27:15.326 [2024-07-16 01:32:40.737201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.326 [2024-07-16 01:32:40.737230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.326 qpair failed and we were unable to recover it. 00:27:15.326 [2024-07-16 01:32:40.737414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.326 [2024-07-16 01:32:40.737448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.326 qpair failed and we were unable to recover it. 00:27:15.326 [2024-07-16 01:32:40.737586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.326 [2024-07-16 01:32:40.737615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.326 qpair failed and we were unable to recover it. 00:27:15.326 [2024-07-16 01:32:40.737801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.326 [2024-07-16 01:32:40.737811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.326 qpair failed and we were unable to recover it. 00:27:15.326 [2024-07-16 01:32:40.738006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.326 [2024-07-16 01:32:40.738035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.326 qpair failed and we were unable to recover it. 00:27:15.326 [2024-07-16 01:32:40.738275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.326 [2024-07-16 01:32:40.738304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.326 qpair failed and we were unable to recover it. 00:27:15.326 [2024-07-16 01:32:40.738585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.326 [2024-07-16 01:32:40.738616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.326 qpair failed and we were unable to recover it. 00:27:15.326 [2024-07-16 01:32:40.738861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.326 [2024-07-16 01:32:40.738891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.326 qpair failed and we were unable to recover it. 00:27:15.326 [2024-07-16 01:32:40.739214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.326 [2024-07-16 01:32:40.739249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.326 qpair failed and we were unable to recover it. 
00:27:15.326 [2024-07-16 01:32:40.739515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.326 [2024-07-16 01:32:40.739545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.326 qpair failed and we were unable to recover it. 00:27:15.326 [2024-07-16 01:32:40.739688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.326 [2024-07-16 01:32:40.739697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.326 qpair failed and we were unable to recover it. 00:27:15.326 [2024-07-16 01:32:40.739923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.326 [2024-07-16 01:32:40.739953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.326 qpair failed and we were unable to recover it. 00:27:15.326 [2024-07-16 01:32:40.740169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.326 [2024-07-16 01:32:40.740198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.326 qpair failed and we were unable to recover it. 00:27:15.326 [2024-07-16 01:32:40.740479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.326 [2024-07-16 01:32:40.740509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.326 qpair failed and we were unable to recover it. 00:27:15.326 [2024-07-16 01:32:40.740661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.326 [2024-07-16 01:32:40.740690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.326 qpair failed and we were unable to recover it. 00:27:15.326 [2024-07-16 01:32:40.740838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.326 [2024-07-16 01:32:40.740868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.326 qpair failed and we were unable to recover it. 00:27:15.326 [2024-07-16 01:32:40.741056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.326 [2024-07-16 01:32:40.741085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.326 qpair failed and we were unable to recover it. 00:27:15.326 [2024-07-16 01:32:40.741269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.326 [2024-07-16 01:32:40.741279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.326 qpair failed and we were unable to recover it. 00:27:15.326 [2024-07-16 01:32:40.741480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.326 [2024-07-16 01:32:40.741492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.326 qpair failed and we were unable to recover it. 
00:27:15.326 [2024-07-16 01:32:40.741589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.326 [2024-07-16 01:32:40.741598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.326 qpair failed and we were unable to recover it. 00:27:15.326 [2024-07-16 01:32:40.741734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.326 [2024-07-16 01:32:40.741743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.326 qpair failed and we were unable to recover it. 00:27:15.326 [2024-07-16 01:32:40.741878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.326 [2024-07-16 01:32:40.741887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.326 qpair failed and we were unable to recover it. 00:27:15.326 [2024-07-16 01:32:40.742100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.326 [2024-07-16 01:32:40.742110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.326 qpair failed and we were unable to recover it. 00:27:15.326 [2024-07-16 01:32:40.742279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.326 [2024-07-16 01:32:40.742299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.326 qpair failed and we were unable to recover it. 00:27:15.326 [2024-07-16 01:32:40.742501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.326 [2024-07-16 01:32:40.742532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.326 qpair failed and we were unable to recover it. 00:27:15.326 [2024-07-16 01:32:40.742716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.326 [2024-07-16 01:32:40.742745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.326 qpair failed and we were unable to recover it. 00:27:15.326 [2024-07-16 01:32:40.742947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.326 [2024-07-16 01:32:40.742976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.326 qpair failed and we were unable to recover it. 00:27:15.326 [2024-07-16 01:32:40.743100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.326 [2024-07-16 01:32:40.743120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.326 qpair failed and we were unable to recover it. 00:27:15.326 [2024-07-16 01:32:40.743321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.326 [2024-07-16 01:32:40.743330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.326 qpair failed and we were unable to recover it. 
00:27:15.326 [2024-07-16 01:32:40.743510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.326 [2024-07-16 01:32:40.743520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.326 qpair failed and we were unable to recover it. 00:27:15.326 [2024-07-16 01:32:40.743735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.326 [2024-07-16 01:32:40.743764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.326 qpair failed and we were unable to recover it. 00:27:15.326 [2024-07-16 01:32:40.743888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.326 [2024-07-16 01:32:40.743917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.326 qpair failed and we were unable to recover it. 00:27:15.326 [2024-07-16 01:32:40.744099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.326 [2024-07-16 01:32:40.744128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.326 qpair failed and we were unable to recover it. 00:27:15.326 [2024-07-16 01:32:40.744393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.326 [2024-07-16 01:32:40.744424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.326 qpair failed and we were unable to recover it. 00:27:15.326 [2024-07-16 01:32:40.744559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.326 [2024-07-16 01:32:40.744595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.326 qpair failed and we were unable to recover it. 00:27:15.327 [2024-07-16 01:32:40.744725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.327 [2024-07-16 01:32:40.744735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.327 qpair failed and we were unable to recover it. 00:27:15.327 [2024-07-16 01:32:40.744902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.327 [2024-07-16 01:32:40.744911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.327 qpair failed and we were unable to recover it. 00:27:15.327 [2024-07-16 01:32:40.745154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.327 [2024-07-16 01:32:40.745164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.327 qpair failed and we were unable to recover it. 00:27:15.327 [2024-07-16 01:32:40.745380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.327 [2024-07-16 01:32:40.745391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.327 qpair failed and we were unable to recover it. 
00:27:15.327 [2024-07-16 01:32:40.745613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.327 [2024-07-16 01:32:40.745623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.327 qpair failed and we were unable to recover it. 00:27:15.327 [2024-07-16 01:32:40.745700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.327 [2024-07-16 01:32:40.745709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.327 qpair failed and we were unable to recover it. 00:27:15.327 [2024-07-16 01:32:40.745955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.327 [2024-07-16 01:32:40.745965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.327 qpair failed and we were unable to recover it. 00:27:15.327 [2024-07-16 01:32:40.746109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.327 [2024-07-16 01:32:40.746118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.327 qpair failed and we were unable to recover it. 00:27:15.327 [2024-07-16 01:32:40.746356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.327 [2024-07-16 01:32:40.746387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.327 qpair failed and we were unable to recover it. 00:27:15.327 [2024-07-16 01:32:40.746601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.327 [2024-07-16 01:32:40.746630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.327 qpair failed and we were unable to recover it. 00:27:15.327 [2024-07-16 01:32:40.746761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.327 [2024-07-16 01:32:40.746790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.327 qpair failed and we were unable to recover it. 00:27:15.327 [2024-07-16 01:32:40.747007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.327 [2024-07-16 01:32:40.747016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.327 qpair failed and we were unable to recover it. 00:27:15.327 [2024-07-16 01:32:40.747164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.327 [2024-07-16 01:32:40.747174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.327 qpair failed and we were unable to recover it. 00:27:15.327 [2024-07-16 01:32:40.747260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.327 [2024-07-16 01:32:40.747271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.327 qpair failed and we were unable to recover it. 
00:27:15.327 [2024-07-16 01:32:40.747371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.327 [2024-07-16 01:32:40.747381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.327 qpair failed and we were unable to recover it.
00:27:15.330 [... the same connect()/sock-connection-error/recovery-failure triple repeated for tqpair=0x7ff0bc000b90: 106 occurrences between 01:32:40.747371 and 01:32:40.766093 ...]
00:27:15.330 [2024-07-16 01:32:40.766431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.330 [2024-07-16 01:32:40.766500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420
00:27:15.330 qpair failed and we were unable to recover it.
00:27:15.331 [... the same triple repeated for tqpair=0x7ff0b4000b90: 40 occurrences between 01:32:40.766431 and 01:32:40.774815 ...]
00:27:15.331 Read completed with error (sct=0, sc=8)
00:27:15.331 starting I/O failed
00:27:15.331 [... "Read/Write completed with error (sct=0, sc=8)" followed by "starting I/O failed", reported for 32 outstanding I/Os in total ...]
00:27:15.331 [2024-07-16 01:32:40.775124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.331 [2024-07-16 01:32:40.775306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.331 [2024-07-16 01:32:40.775349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420
00:27:15.331 qpair failed and we were unable to recover it.
00:27:15.331 [2024-07-16 01:32:40.775498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.331 [2024-07-16 01:32:40.775512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.331 qpair failed and we were unable to recover it.
00:27:15.332 [... the same triple repeated for tqpair=0x7ff0bc000b90: 50 occurrences between 01:32:40.775498 and 01:32:40.783476 ...]
00:27:15.332 [2024-07-16 01:32:40.783557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.332 [2024-07-16 01:32:40.783566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.332 qpair failed and we were unable to recover it. 00:27:15.332 [2024-07-16 01:32:40.783757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.332 [2024-07-16 01:32:40.783767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.332 qpair failed and we were unable to recover it. 00:27:15.332 [2024-07-16 01:32:40.783862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.333 [2024-07-16 01:32:40.783871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.333 qpair failed and we were unable to recover it. 00:27:15.333 [2024-07-16 01:32:40.783977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.333 [2024-07-16 01:32:40.783986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.333 qpair failed and we were unable to recover it. 00:27:15.333 [2024-07-16 01:32:40.784210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.333 [2024-07-16 01:32:40.784220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.333 qpair failed and we were unable to recover it. 00:27:15.333 [2024-07-16 01:32:40.784302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.333 [2024-07-16 01:32:40.784311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.333 qpair failed and we were unable to recover it. 00:27:15.333 [2024-07-16 01:32:40.784409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.333 [2024-07-16 01:32:40.784418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.333 qpair failed and we were unable to recover it. 00:27:15.333 [2024-07-16 01:32:40.784505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.333 [2024-07-16 01:32:40.784514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.333 qpair failed and we were unable to recover it. 00:27:15.333 [2024-07-16 01:32:40.784650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.333 [2024-07-16 01:32:40.784660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.333 qpair failed and we were unable to recover it. 00:27:15.333 [2024-07-16 01:32:40.784817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.333 [2024-07-16 01:32:40.784826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.333 qpair failed and we were unable to recover it. 
00:27:15.333 [2024-07-16 01:32:40.784923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.333 [2024-07-16 01:32:40.784931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.333 qpair failed and we were unable to recover it. 00:27:15.333 [2024-07-16 01:32:40.785004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.333 [2024-07-16 01:32:40.785013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.333 qpair failed and we were unable to recover it. 00:27:15.333 [2024-07-16 01:32:40.785246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.333 [2024-07-16 01:32:40.785256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.333 qpair failed and we were unable to recover it. 00:27:15.333 [2024-07-16 01:32:40.785499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.333 [2024-07-16 01:32:40.785533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.333 qpair failed and we were unable to recover it. 00:27:15.333 [2024-07-16 01:32:40.785676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.333 [2024-07-16 01:32:40.785705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.333 qpair failed and we were unable to recover it. 00:27:15.333 [2024-07-16 01:32:40.786024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.333 [2024-07-16 01:32:40.786054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.333 qpair failed and we were unable to recover it. 00:27:15.333 [2024-07-16 01:32:40.786305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.333 [2024-07-16 01:32:40.786314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.333 qpair failed and we were unable to recover it. 00:27:15.333 [2024-07-16 01:32:40.786525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.333 [2024-07-16 01:32:40.786535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.333 qpair failed and we were unable to recover it. 00:27:15.333 [2024-07-16 01:32:40.786683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.333 [2024-07-16 01:32:40.786693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.333 qpair failed and we were unable to recover it. 00:27:15.333 [2024-07-16 01:32:40.786838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.333 [2024-07-16 01:32:40.786847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.333 qpair failed and we were unable to recover it. 
00:27:15.333 [2024-07-16 01:32:40.787072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.333 [2024-07-16 01:32:40.787082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.333 qpair failed and we were unable to recover it. 00:27:15.333 [2024-07-16 01:32:40.787224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.333 [2024-07-16 01:32:40.787234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.333 qpair failed and we were unable to recover it. 00:27:15.333 [2024-07-16 01:32:40.787461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.333 [2024-07-16 01:32:40.787474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.333 qpair failed and we were unable to recover it. 00:27:15.333 [2024-07-16 01:32:40.787682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.333 [2024-07-16 01:32:40.787692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.333 qpair failed and we were unable to recover it. 00:27:15.333 [2024-07-16 01:32:40.787827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.333 [2024-07-16 01:32:40.787856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.333 qpair failed and we were unable to recover it. 00:27:15.333 [2024-07-16 01:32:40.788160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.333 [2024-07-16 01:32:40.788190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.333 qpair failed and we were unable to recover it. 00:27:15.333 [2024-07-16 01:32:40.788409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.333 [2024-07-16 01:32:40.788440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.333 qpair failed and we were unable to recover it. 00:27:15.333 [2024-07-16 01:32:40.788587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.333 [2024-07-16 01:32:40.788616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.333 qpair failed and we were unable to recover it. 00:27:15.333 [2024-07-16 01:32:40.788756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.333 [2024-07-16 01:32:40.788766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.333 qpair failed and we were unable to recover it. 00:27:15.333 [2024-07-16 01:32:40.788918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.333 [2024-07-16 01:32:40.788928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.333 qpair failed and we were unable to recover it. 
00:27:15.333 [2024-07-16 01:32:40.789098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.333 [2024-07-16 01:32:40.789125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.333 qpair failed and we were unable to recover it. 00:27:15.333 [2024-07-16 01:32:40.789306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.333 [2024-07-16 01:32:40.789347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.333 qpair failed and we were unable to recover it. 00:27:15.333 [2024-07-16 01:32:40.789552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.333 [2024-07-16 01:32:40.789583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.333 qpair failed and we were unable to recover it. 00:27:15.333 [2024-07-16 01:32:40.789725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.333 [2024-07-16 01:32:40.789734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.333 qpair failed and we were unable to recover it. 00:27:15.333 [2024-07-16 01:32:40.789884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.333 [2024-07-16 01:32:40.789894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.333 qpair failed and we were unable to recover it. 00:27:15.333 [2024-07-16 01:32:40.790057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.333 [2024-07-16 01:32:40.790067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.333 qpair failed and we were unable to recover it. 00:27:15.333 [2024-07-16 01:32:40.790286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.333 [2024-07-16 01:32:40.790296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.333 qpair failed and we were unable to recover it. 00:27:15.333 [2024-07-16 01:32:40.790489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.333 [2024-07-16 01:32:40.790520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.333 qpair failed and we were unable to recover it. 00:27:15.333 [2024-07-16 01:32:40.790798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.333 [2024-07-16 01:32:40.790828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.333 qpair failed and we were unable to recover it. 00:27:15.333 [2024-07-16 01:32:40.791038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.333 [2024-07-16 01:32:40.791067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.333 qpair failed and we were unable to recover it. 
00:27:15.334 [2024-07-16 01:32:40.791221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.334 [2024-07-16 01:32:40.791231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.334 qpair failed and we were unable to recover it. 00:27:15.334 [2024-07-16 01:32:40.791423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.334 [2024-07-16 01:32:40.791454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.334 qpair failed and we were unable to recover it. 00:27:15.334 [2024-07-16 01:32:40.791658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.334 [2024-07-16 01:32:40.791688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.334 qpair failed and we were unable to recover it. 00:27:15.334 [2024-07-16 01:32:40.791815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.334 [2024-07-16 01:32:40.791845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.334 qpair failed and we were unable to recover it. 00:27:15.334 [2024-07-16 01:32:40.792168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.334 [2024-07-16 01:32:40.792177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.334 qpair failed and we were unable to recover it. 00:27:15.334 [2024-07-16 01:32:40.792325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.334 [2024-07-16 01:32:40.792335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.334 qpair failed and we were unable to recover it. 00:27:15.334 [2024-07-16 01:32:40.792509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.334 [2024-07-16 01:32:40.792540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.334 qpair failed and we were unable to recover it. 00:27:15.334 [2024-07-16 01:32:40.792688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.334 [2024-07-16 01:32:40.792718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.334 qpair failed and we were unable to recover it. 00:27:15.334 [2024-07-16 01:32:40.792873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.334 [2024-07-16 01:32:40.792902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.334 qpair failed and we were unable to recover it. 00:27:15.334 [2024-07-16 01:32:40.793192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.334 [2024-07-16 01:32:40.793222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.334 qpair failed and we were unable to recover it. 
00:27:15.334 [2024-07-16 01:32:40.793474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.334 [2024-07-16 01:32:40.793508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.334 qpair failed and we were unable to recover it. 00:27:15.334 [2024-07-16 01:32:40.793710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.334 [2024-07-16 01:32:40.793740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.334 qpair failed and we were unable to recover it. 00:27:15.334 [2024-07-16 01:32:40.793914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.334 [2024-07-16 01:32:40.793943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.334 qpair failed and we were unable to recover it. 00:27:15.334 [2024-07-16 01:32:40.794136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.334 [2024-07-16 01:32:40.794166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.334 qpair failed and we were unable to recover it. 00:27:15.334 [2024-07-16 01:32:40.794471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.334 [2024-07-16 01:32:40.794481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.334 qpair failed and we were unable to recover it. 00:27:15.334 [2024-07-16 01:32:40.794677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.334 [2024-07-16 01:32:40.794687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.334 qpair failed and we were unable to recover it. 00:27:15.334 [2024-07-16 01:32:40.794852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.334 [2024-07-16 01:32:40.794882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.334 qpair failed and we were unable to recover it. 00:27:15.334 [2024-07-16 01:32:40.795095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.334 [2024-07-16 01:32:40.795125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.334 qpair failed and we were unable to recover it. 00:27:15.334 [2024-07-16 01:32:40.795433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.334 [2024-07-16 01:32:40.795469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.334 qpair failed and we were unable to recover it. 00:27:15.334 [2024-07-16 01:32:40.795613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.334 [2024-07-16 01:32:40.795642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.334 qpair failed and we were unable to recover it. 
00:27:15.334 [2024-07-16 01:32:40.795863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.334 [2024-07-16 01:32:40.795892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.334 qpair failed and we were unable to recover it. 00:27:15.334 [2024-07-16 01:32:40.796088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.334 [2024-07-16 01:32:40.796118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.334 qpair failed and we were unable to recover it. 00:27:15.334 [2024-07-16 01:32:40.796382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.334 [2024-07-16 01:32:40.796420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.334 qpair failed and we were unable to recover it. 00:27:15.334 [2024-07-16 01:32:40.796610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.334 [2024-07-16 01:32:40.796640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.334 qpair failed and we were unable to recover it. 00:27:15.334 [2024-07-16 01:32:40.796784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.334 [2024-07-16 01:32:40.796793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.334 qpair failed and we were unable to recover it. 00:27:15.334 [2024-07-16 01:32:40.796889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.334 [2024-07-16 01:32:40.796899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.334 qpair failed and we were unable to recover it. 00:27:15.334 [2024-07-16 01:32:40.797008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.334 [2024-07-16 01:32:40.797018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.334 qpair failed and we were unable to recover it. 00:27:15.334 [2024-07-16 01:32:40.797158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.334 [2024-07-16 01:32:40.797168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.334 qpair failed and we were unable to recover it. 00:27:15.334 [2024-07-16 01:32:40.797242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.334 [2024-07-16 01:32:40.797251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.334 qpair failed and we were unable to recover it. 00:27:15.334 [2024-07-16 01:32:40.797332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.334 [2024-07-16 01:32:40.797345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.334 qpair failed and we were unable to recover it. 
00:27:15.334 [2024-07-16 01:32:40.797549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.334 [2024-07-16 01:32:40.797558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.334 qpair failed and we were unable to recover it. 00:27:15.334 [2024-07-16 01:32:40.797648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.334 [2024-07-16 01:32:40.797657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.334 qpair failed and we were unable to recover it. 00:27:15.334 [2024-07-16 01:32:40.797784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.334 [2024-07-16 01:32:40.797794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.334 qpair failed and we were unable to recover it. 00:27:15.334 [2024-07-16 01:32:40.797860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.334 [2024-07-16 01:32:40.797869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.334 qpair failed and we were unable to recover it. 00:27:15.335 [2024-07-16 01:32:40.798039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.335 [2024-07-16 01:32:40.798049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.335 qpair failed and we were unable to recover it. 00:27:15.335 [2024-07-16 01:32:40.798204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.335 [2024-07-16 01:32:40.798214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.335 qpair failed and we were unable to recover it. 00:27:15.335 [2024-07-16 01:32:40.798371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.335 [2024-07-16 01:32:40.798381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.335 qpair failed and we were unable to recover it. 00:27:15.335 [2024-07-16 01:32:40.798548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.335 [2024-07-16 01:32:40.798558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.335 qpair failed and we were unable to recover it. 00:27:15.335 [2024-07-16 01:32:40.798698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.335 [2024-07-16 01:32:40.798708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.335 qpair failed and we were unable to recover it. 00:27:15.335 [2024-07-16 01:32:40.798914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.335 [2024-07-16 01:32:40.798924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.335 qpair failed and we were unable to recover it. 
00:27:15.335 [2024-07-16 01:32:40.799086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.335 [2024-07-16 01:32:40.799096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.335 qpair failed and we were unable to recover it. 00:27:15.335 [2024-07-16 01:32:40.799310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.335 [2024-07-16 01:32:40.799354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.335 qpair failed and we were unable to recover it. 00:27:15.335 [2024-07-16 01:32:40.799565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.335 [2024-07-16 01:32:40.799597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.335 qpair failed and we were unable to recover it. 00:27:15.335 [2024-07-16 01:32:40.799865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.335 [2024-07-16 01:32:40.799896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.335 qpair failed and we were unable to recover it. 00:27:15.335 [2024-07-16 01:32:40.800186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.335 [2024-07-16 01:32:40.800215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.335 qpair failed and we were unable to recover it. 00:27:15.335 [2024-07-16 01:32:40.800411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.335 [2024-07-16 01:32:40.800443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.335 qpair failed and we were unable to recover it. 00:27:15.335 [2024-07-16 01:32:40.800637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.335 [2024-07-16 01:32:40.800667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.335 qpair failed and we were unable to recover it. 00:27:15.335 [2024-07-16 01:32:40.800863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.335 [2024-07-16 01:32:40.800893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.335 qpair failed and we were unable to recover it. 00:27:15.335 [2024-07-16 01:32:40.801139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.335 [2024-07-16 01:32:40.801169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.335 qpair failed and we were unable to recover it. 00:27:15.335 [2024-07-16 01:32:40.801420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.335 [2024-07-16 01:32:40.801451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.335 qpair failed and we were unable to recover it. 
00:27:15.335 [2024-07-16 01:32:40.801637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.335 [2024-07-16 01:32:40.801668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.335 qpair failed and we were unable to recover it. 00:27:15.335 [2024-07-16 01:32:40.801911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.335 [2024-07-16 01:32:40.801941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.335 qpair failed and we were unable to recover it. 00:27:15.335 [2024-07-16 01:32:40.802084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.335 [2024-07-16 01:32:40.802093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.335 qpair failed and we were unable to recover it. 00:27:15.335 [2024-07-16 01:32:40.802253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.335 [2024-07-16 01:32:40.802263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.335 qpair failed and we were unable to recover it. 00:27:15.335 [2024-07-16 01:32:40.802398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.335 [2024-07-16 01:32:40.802408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.335 qpair failed and we were unable to recover it. 00:27:15.335 [2024-07-16 01:32:40.802557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.335 [2024-07-16 01:32:40.802566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.335 qpair failed and we were unable to recover it. 00:27:15.335 [2024-07-16 01:32:40.802723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.335 [2024-07-16 01:32:40.802733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.335 qpair failed and we were unable to recover it. 00:27:15.335 [2024-07-16 01:32:40.802814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.335 [2024-07-16 01:32:40.802823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.335 qpair failed and we were unable to recover it. 00:27:15.335 [2024-07-16 01:32:40.802998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.335 [2024-07-16 01:32:40.803008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.335 qpair failed and we were unable to recover it. 00:27:15.335 [2024-07-16 01:32:40.803137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.335 [2024-07-16 01:32:40.803146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.335 qpair failed and we were unable to recover it. 
00:27:15.335 [2024-07-16 01:32:40.803324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.335 [2024-07-16 01:32:40.803334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.335 qpair failed and we were unable to recover it. 00:27:15.335 [2024-07-16 01:32:40.803579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.335 [2024-07-16 01:32:40.803589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.335 qpair failed and we were unable to recover it. 00:27:15.335 [2024-07-16 01:32:40.803671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.335 [2024-07-16 01:32:40.803682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.335 qpair failed and we were unable to recover it. 00:27:15.335 [2024-07-16 01:32:40.803831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.335 [2024-07-16 01:32:40.803840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.335 qpair failed and we were unable to recover it. 00:27:15.335 [2024-07-16 01:32:40.803920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.335 [2024-07-16 01:32:40.803929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.335 qpair failed and we were unable to recover it. 00:27:15.335 [2024-07-16 01:32:40.804233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.335 [2024-07-16 01:32:40.804264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.335 qpair failed and we were unable to recover it. 00:27:15.335 [2024-07-16 01:32:40.804461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.335 [2024-07-16 01:32:40.804492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.335 qpair failed and we were unable to recover it. 00:27:15.335 [2024-07-16 01:32:40.804631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.335 [2024-07-16 01:32:40.804661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.335 qpair failed and we were unable to recover it. 00:27:15.335 [2024-07-16 01:32:40.804785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.335 [2024-07-16 01:32:40.804816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.335 qpair failed and we were unable to recover it. 00:27:15.335 [2024-07-16 01:32:40.804956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.335 [2024-07-16 01:32:40.804965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.335 qpair failed and we were unable to recover it. 
00:27:15.335 [2024-07-16 01:32:40.805147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.335 [2024-07-16 01:32:40.805174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.335 qpair failed and we were unable to recover it. 00:27:15.335 [2024-07-16 01:32:40.805368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.335 [2024-07-16 01:32:40.805399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.335 qpair failed and we were unable to recover it. 00:27:15.335 [2024-07-16 01:32:40.805594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.335 [2024-07-16 01:32:40.805624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.335 qpair failed and we were unable to recover it. 00:27:15.335 [2024-07-16 01:32:40.805804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.335 [2024-07-16 01:32:40.805834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.335 qpair failed and we were unable to recover it. 00:27:15.335 [2024-07-16 01:32:40.806023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.335 [2024-07-16 01:32:40.806052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.336 qpair failed and we were unable to recover it. 00:27:15.336 [2024-07-16 01:32:40.806264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.336 [2024-07-16 01:32:40.806294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.336 qpair failed and we were unable to recover it. 00:27:15.336 [2024-07-16 01:32:40.806520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.336 [2024-07-16 01:32:40.806552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.336 qpair failed and we were unable to recover it. 00:27:15.336 [2024-07-16 01:32:40.806749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.336 [2024-07-16 01:32:40.806778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.336 qpair failed and we were unable to recover it. 00:27:15.336 [2024-07-16 01:32:40.806910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.336 [2024-07-16 01:32:40.806919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.336 qpair failed and we were unable to recover it. 00:27:15.336 [2024-07-16 01:32:40.807138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.336 [2024-07-16 01:32:40.807167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.336 qpair failed and we were unable to recover it. 
00:27:15.336 [2024-07-16 01:32:40.807367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.336 [2024-07-16 01:32:40.807409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.336 qpair failed and we were unable to recover it. 00:27:15.336 [2024-07-16 01:32:40.807666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.336 [2024-07-16 01:32:40.807696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.336 qpair failed and we were unable to recover it. 00:27:15.336 [2024-07-16 01:32:40.807881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.336 [2024-07-16 01:32:40.807910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.336 qpair failed and we were unable to recover it. 00:27:15.336 [2024-07-16 01:32:40.808093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.336 [2024-07-16 01:32:40.808123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.336 qpair failed and we were unable to recover it. 00:27:15.336 [2024-07-16 01:32:40.808260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.336 [2024-07-16 01:32:40.808270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.336 qpair failed and we were unable to recover it. 00:27:15.336 [2024-07-16 01:32:40.808475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.336 [2024-07-16 01:32:40.808485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.336 qpair failed and we were unable to recover it. 00:27:15.336 [2024-07-16 01:32:40.808688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.336 [2024-07-16 01:32:40.808698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.336 qpair failed and we were unable to recover it. 00:27:15.336 [2024-07-16 01:32:40.808846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.336 [2024-07-16 01:32:40.808856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.336 qpair failed and we were unable to recover it. 00:27:15.336 [2024-07-16 01:32:40.809106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.336 [2024-07-16 01:32:40.809135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.336 qpair failed and we were unable to recover it. 00:27:15.336 [2024-07-16 01:32:40.809450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.336 [2024-07-16 01:32:40.809521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.336 qpair failed and we were unable to recover it. 
00:27:15.336 [2024-07-16 01:32:40.809826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.336 [2024-07-16 01:32:40.809860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420
00:27:15.336 qpair failed and we were unable to recover it.
[... the same connect() failed / sock connection error / qpair failed triplet repeats for tqpair=0x7ff0c4000b90 through 01:32:40.810312 ...]
00:27:15.336 [2024-07-16 01:32:40.810649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.336 [2024-07-16 01:32:40.810717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420
00:27:15.336 qpair failed and we were unable to recover it.
[... the same triplet repeats for tqpair=0x1d2cfc0 through 01:32:40.815465 ...]
00:27:15.336 [2024-07-16 01:32:40.815712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.336 [2024-07-16 01:32:40.815749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.336 qpair failed and we were unable to recover it.
[... the same triplet repeats for tqpair=0x7ff0bc000b90 through 01:32:40.831865 ...]
00:27:15.339 [2024-07-16 01:32:40.832068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.339 [2024-07-16 01:32:40.832136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420
00:27:15.339 qpair failed and we were unable to recover it.
00:27:15.339 [2024-07-16 01:32:40.832328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.339 [2024-07-16 01:32:40.832371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420
00:27:15.339 qpair failed and we were unable to recover it.
[... the same triplet repeats for tqpair=0x1d2cfc0 through 01:32:40.837873 ...]
00:27:15.340 [2024-07-16 01:32:40.838048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.340 [2024-07-16 01:32:40.838063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.340 qpair failed and we were unable to recover it.
[... the same triplet repeats for tqpair=0x7ff0bc000b90 through 01:32:40.841307 ...]
00:27:15.340 [2024-07-16 01:32:40.841528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.340 [2024-07-16 01:32:40.841567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420
00:27:15.340 qpair failed and we were unable to recover it.
[... the same triplet repeats for tqpair=0x7ff0b4000b90 through 01:32:40.844995 ...]
00:27:15.341 [2024-07-16 01:32:40.845100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.341 [2024-07-16 01:32:40.845111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.341 qpair failed and we were unable to recover it.
[... the same triplet repeats for tqpair=0x7ff0bc000b90 through 01:32:40.852217, all against addr=10.0.0.2, port=4420 ...]
00:27:15.342 [2024-07-16 01:32:40.852347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.342 [2024-07-16 01:32:40.852360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.342 qpair failed and we were unable to recover it. 00:27:15.342 [2024-07-16 01:32:40.852515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.342 [2024-07-16 01:32:40.852525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.342 qpair failed and we were unable to recover it. 00:27:15.342 [2024-07-16 01:32:40.852690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.342 [2024-07-16 01:32:40.852700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.342 qpair failed and we were unable to recover it. 00:27:15.342 [2024-07-16 01:32:40.852842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.342 [2024-07-16 01:32:40.852856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.342 qpair failed and we were unable to recover it. 00:27:15.342 [2024-07-16 01:32:40.852928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.342 [2024-07-16 01:32:40.852938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.342 qpair failed and we were unable to recover it. 00:27:15.342 [2024-07-16 01:32:40.853113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.342 [2024-07-16 01:32:40.853123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.342 qpair failed and we were unable to recover it. 00:27:15.342 [2024-07-16 01:32:40.853272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.342 [2024-07-16 01:32:40.853282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.342 qpair failed and we were unable to recover it. 00:27:15.342 [2024-07-16 01:32:40.853455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.342 [2024-07-16 01:32:40.853466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.342 qpair failed and we were unable to recover it. 00:27:15.342 [2024-07-16 01:32:40.853550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.342 [2024-07-16 01:32:40.853559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.342 qpair failed and we were unable to recover it. 00:27:15.342 [2024-07-16 01:32:40.853704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.342 [2024-07-16 01:32:40.853714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.342 qpair failed and we were unable to recover it. 
00:27:15.342 [2024-07-16 01:32:40.853818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.342 [2024-07-16 01:32:40.853827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.342 qpair failed and we were unable to recover it. 00:27:15.342 [2024-07-16 01:32:40.854002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.342 [2024-07-16 01:32:40.854011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.342 qpair failed and we were unable to recover it. 00:27:15.342 [2024-07-16 01:32:40.854217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.342 [2024-07-16 01:32:40.854227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.342 qpair failed and we were unable to recover it. 00:27:15.342 [2024-07-16 01:32:40.854316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.342 [2024-07-16 01:32:40.854325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.342 qpair failed and we were unable to recover it. 00:27:15.342 [2024-07-16 01:32:40.854488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.342 [2024-07-16 01:32:40.854498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.342 qpair failed and we were unable to recover it. 00:27:15.342 [2024-07-16 01:32:40.854581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.342 [2024-07-16 01:32:40.854589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.342 qpair failed and we were unable to recover it. 00:27:15.342 [2024-07-16 01:32:40.854790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.342 [2024-07-16 01:32:40.854799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.342 qpair failed and we were unable to recover it. 00:27:15.342 [2024-07-16 01:32:40.854898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.342 [2024-07-16 01:32:40.854907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.342 qpair failed and we were unable to recover it. 00:27:15.342 [2024-07-16 01:32:40.855051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.342 [2024-07-16 01:32:40.855061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.342 qpair failed and we were unable to recover it. 00:27:15.342 [2024-07-16 01:32:40.855194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.342 [2024-07-16 01:32:40.855204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.342 qpair failed and we were unable to recover it. 
00:27:15.342 [2024-07-16 01:32:40.855357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.342 [2024-07-16 01:32:40.855367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.342 qpair failed and we were unable to recover it. 00:27:15.342 [2024-07-16 01:32:40.855602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.342 [2024-07-16 01:32:40.855612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.342 qpair failed and we were unable to recover it. 00:27:15.342 [2024-07-16 01:32:40.855762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.342 [2024-07-16 01:32:40.855771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.342 qpair failed and we were unable to recover it. 00:27:15.342 [2024-07-16 01:32:40.855998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.342 [2024-07-16 01:32:40.856007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.342 qpair failed and we were unable to recover it. 00:27:15.342 [2024-07-16 01:32:40.856248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.342 [2024-07-16 01:32:40.856257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.342 qpair failed and we were unable to recover it. 00:27:15.343 [2024-07-16 01:32:40.856429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.343 [2024-07-16 01:32:40.856440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.343 qpair failed and we were unable to recover it. 00:27:15.343 [2024-07-16 01:32:40.856535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.343 [2024-07-16 01:32:40.856544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.343 qpair failed and we were unable to recover it. 00:27:15.343 [2024-07-16 01:32:40.856762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.343 [2024-07-16 01:32:40.856772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.343 qpair failed and we were unable to recover it. 00:27:15.343 [2024-07-16 01:32:40.856912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.343 [2024-07-16 01:32:40.856922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.343 qpair failed and we were unable to recover it. 00:27:15.343 [2024-07-16 01:32:40.857078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.343 [2024-07-16 01:32:40.857087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.343 qpair failed and we were unable to recover it. 
00:27:15.343 [2024-07-16 01:32:40.857218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.343 [2024-07-16 01:32:40.857228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.343 qpair failed and we were unable to recover it. 00:27:15.343 [2024-07-16 01:32:40.857378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.343 [2024-07-16 01:32:40.857388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.343 qpair failed and we were unable to recover it. 00:27:15.343 [2024-07-16 01:32:40.857542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.343 [2024-07-16 01:32:40.857552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.343 qpair failed and we were unable to recover it. 00:27:15.343 [2024-07-16 01:32:40.857654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.343 [2024-07-16 01:32:40.857664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.343 qpair failed and we were unable to recover it. 00:27:15.343 [2024-07-16 01:32:40.857760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.343 [2024-07-16 01:32:40.857770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.343 qpair failed and we were unable to recover it. 00:27:15.343 [2024-07-16 01:32:40.857835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.343 [2024-07-16 01:32:40.857844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.343 qpair failed and we were unable to recover it. 00:27:15.343 [2024-07-16 01:32:40.857991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.343 [2024-07-16 01:32:40.858001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.343 qpair failed and we were unable to recover it. 00:27:15.343 [2024-07-16 01:32:40.858183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.343 [2024-07-16 01:32:40.858213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.343 qpair failed and we were unable to recover it. 00:27:15.343 [2024-07-16 01:32:40.858328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.343 [2024-07-16 01:32:40.858378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.343 qpair failed and we were unable to recover it. 00:27:15.343 [2024-07-16 01:32:40.858584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.343 [2024-07-16 01:32:40.858614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.343 qpair failed and we were unable to recover it. 
00:27:15.343 [2024-07-16 01:32:40.858881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.343 [2024-07-16 01:32:40.858911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.343 qpair failed and we were unable to recover it. 00:27:15.343 [2024-07-16 01:32:40.859156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.343 [2024-07-16 01:32:40.859166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.343 qpair failed and we were unable to recover it. 00:27:15.343 [2024-07-16 01:32:40.859302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.343 [2024-07-16 01:32:40.859311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.343 qpair failed and we were unable to recover it. 00:27:15.343 [2024-07-16 01:32:40.859475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.343 [2024-07-16 01:32:40.859487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.343 qpair failed and we were unable to recover it. 00:27:15.343 [2024-07-16 01:32:40.859712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.343 [2024-07-16 01:32:40.859742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.343 qpair failed and we were unable to recover it. 00:27:15.343 [2024-07-16 01:32:40.859920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.343 [2024-07-16 01:32:40.859950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.343 qpair failed and we were unable to recover it. 00:27:15.343 [2024-07-16 01:32:40.860159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.343 [2024-07-16 01:32:40.860169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.343 qpair failed and we were unable to recover it. 00:27:15.343 [2024-07-16 01:32:40.860388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.343 [2024-07-16 01:32:40.860399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.343 qpair failed and we were unable to recover it. 00:27:15.343 [2024-07-16 01:32:40.860547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.343 [2024-07-16 01:32:40.860557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.343 qpair failed and we were unable to recover it. 00:27:15.343 [2024-07-16 01:32:40.860655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.343 [2024-07-16 01:32:40.860664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.343 qpair failed and we were unable to recover it. 
00:27:15.343 [2024-07-16 01:32:40.860742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.343 [2024-07-16 01:32:40.860751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.343 qpair failed and we were unable to recover it. 00:27:15.343 [2024-07-16 01:32:40.860909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.343 [2024-07-16 01:32:40.860918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.343 qpair failed and we were unable to recover it. 00:27:15.343 [2024-07-16 01:32:40.861174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.343 [2024-07-16 01:32:40.861183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.343 qpair failed and we were unable to recover it. 00:27:15.343 [2024-07-16 01:32:40.861417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.343 [2024-07-16 01:32:40.861448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.343 qpair failed and we were unable to recover it. 00:27:15.343 [2024-07-16 01:32:40.861642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.343 [2024-07-16 01:32:40.861672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.343 qpair failed and we were unable to recover it. 00:27:15.343 [2024-07-16 01:32:40.861862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.343 [2024-07-16 01:32:40.861891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.343 qpair failed and we were unable to recover it. 00:27:15.343 [2024-07-16 01:32:40.862166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.343 [2024-07-16 01:32:40.862195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.343 qpair failed and we were unable to recover it. 00:27:15.343 [2024-07-16 01:32:40.862393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.343 [2024-07-16 01:32:40.862425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.344 qpair failed and we were unable to recover it. 00:27:15.344 [2024-07-16 01:32:40.862683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.344 [2024-07-16 01:32:40.862713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.344 qpair failed and we were unable to recover it. 00:27:15.344 [2024-07-16 01:32:40.862861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.344 [2024-07-16 01:32:40.862889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.344 qpair failed and we were unable to recover it. 
00:27:15.344 [2024-07-16 01:32:40.863159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.344 [2024-07-16 01:32:40.863189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.344 qpair failed and we were unable to recover it. 00:27:15.344 [2024-07-16 01:32:40.863474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.344 [2024-07-16 01:32:40.863484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.344 qpair failed and we were unable to recover it. 00:27:15.344 [2024-07-16 01:32:40.863679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.344 [2024-07-16 01:32:40.863689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.344 qpair failed and we were unable to recover it. 00:27:15.344 [2024-07-16 01:32:40.863849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.344 [2024-07-16 01:32:40.863878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.344 qpair failed and we were unable to recover it. 00:27:15.344 [2024-07-16 01:32:40.864011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.344 [2024-07-16 01:32:40.864041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.344 qpair failed and we were unable to recover it. 00:27:15.344 [2024-07-16 01:32:40.864225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.344 [2024-07-16 01:32:40.864254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.344 qpair failed and we were unable to recover it. 00:27:15.344 [2024-07-16 01:32:40.864519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.344 [2024-07-16 01:32:40.864529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.344 qpair failed and we were unable to recover it. 00:27:15.344 [2024-07-16 01:32:40.864698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.344 [2024-07-16 01:32:40.864708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.344 qpair failed and we were unable to recover it. 00:27:15.344 [2024-07-16 01:32:40.864879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.344 [2024-07-16 01:32:40.864908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.344 qpair failed and we were unable to recover it. 00:27:15.344 [2024-07-16 01:32:40.865104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.344 [2024-07-16 01:32:40.865133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.344 qpair failed and we were unable to recover it. 
00:27:15.344 [2024-07-16 01:32:40.865322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.344 [2024-07-16 01:32:40.865372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.344 qpair failed and we were unable to recover it. 00:27:15.344 [2024-07-16 01:32:40.865640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.344 [2024-07-16 01:32:40.865655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.344 qpair failed and we were unable to recover it. 00:27:15.344 [2024-07-16 01:32:40.865763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.344 [2024-07-16 01:32:40.865778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.344 qpair failed and we were unable to recover it. 00:27:15.344 [2024-07-16 01:32:40.866035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.344 [2024-07-16 01:32:40.866050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.344 qpair failed and we were unable to recover it. 00:27:15.344 [2024-07-16 01:32:40.866281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.344 [2024-07-16 01:32:40.866310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.344 qpair failed and we were unable to recover it. 00:27:15.344 [2024-07-16 01:32:40.866587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.344 [2024-07-16 01:32:40.866616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.344 qpair failed and we were unable to recover it. 00:27:15.344 [2024-07-16 01:32:40.866839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.344 [2024-07-16 01:32:40.866868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.344 qpair failed and we were unable to recover it. 00:27:15.344 [2024-07-16 01:32:40.867052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.344 [2024-07-16 01:32:40.867080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.344 qpair failed and we were unable to recover it. 00:27:15.344 [2024-07-16 01:32:40.867287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.344 [2024-07-16 01:32:40.867316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.344 qpair failed and we were unable to recover it. 00:27:15.344 [2024-07-16 01:32:40.867590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.344 [2024-07-16 01:32:40.867605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.344 qpair failed and we were unable to recover it. 
00:27:15.344 [2024-07-16 01:32:40.867843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.344 [2024-07-16 01:32:40.867857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.344 qpair failed and we were unable to recover it. 00:27:15.344 [2024-07-16 01:32:40.868001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.344 [2024-07-16 01:32:40.868015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.344 qpair failed and we were unable to recover it. 00:27:15.344 [2024-07-16 01:32:40.868258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.344 [2024-07-16 01:32:40.868287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.344 qpair failed and we were unable to recover it. 00:27:15.344 [2024-07-16 01:32:40.868497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.344 [2024-07-16 01:32:40.868527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.344 qpair failed and we were unable to recover it. 00:27:15.344 [2024-07-16 01:32:40.868668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.344 [2024-07-16 01:32:40.868699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.344 qpair failed and we were unable to recover it. 00:27:15.344 [2024-07-16 01:32:40.868939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.344 [2024-07-16 01:32:40.868968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.344 qpair failed and we were unable to recover it. 00:27:15.344 [2024-07-16 01:32:40.869255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.344 [2024-07-16 01:32:40.869285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.344 qpair failed and we were unable to recover it. 00:27:15.344 [2024-07-16 01:32:40.869492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.344 [2024-07-16 01:32:40.869508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.344 qpair failed and we were unable to recover it. 00:27:15.344 [2024-07-16 01:32:40.869667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.344 [2024-07-16 01:32:40.869680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.344 qpair failed and we were unable to recover it. 00:27:15.344 [2024-07-16 01:32:40.869868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.344 [2024-07-16 01:32:40.869897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.344 qpair failed and we were unable to recover it. 
00:27:15.344 [2024-07-16 01:32:40.870161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.344 [2024-07-16 01:32:40.870191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.344 qpair failed and we were unable to recover it. 00:27:15.344 [2024-07-16 01:32:40.870418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.344 [2024-07-16 01:32:40.870450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.344 qpair failed and we were unable to recover it. 00:27:15.344 [2024-07-16 01:32:40.870576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.344 [2024-07-16 01:32:40.870605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.344 qpair failed and we were unable to recover it. 00:27:15.344 [2024-07-16 01:32:40.870861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.344 [2024-07-16 01:32:40.870892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.344 qpair failed and we were unable to recover it. 00:27:15.344 [2024-07-16 01:32:40.871152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.344 [2024-07-16 01:32:40.871161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.344 qpair failed and we were unable to recover it. 00:27:15.344 [2024-07-16 01:32:40.871291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.344 [2024-07-16 01:32:40.871301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.344 qpair failed and we were unable to recover it. 00:27:15.344 [2024-07-16 01:32:40.871477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.344 [2024-07-16 01:32:40.871488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.344 qpair failed and we were unable to recover it. 00:27:15.344 [2024-07-16 01:32:40.871583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.344 [2024-07-16 01:32:40.871595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.344 qpair failed and we were unable to recover it. 00:27:15.344 [2024-07-16 01:32:40.871758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.344 [2024-07-16 01:32:40.871768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.345 qpair failed and we were unable to recover it. 00:27:15.345 [2024-07-16 01:32:40.872011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.345 [2024-07-16 01:32:40.872021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.345 qpair failed and we were unable to recover it. 
00:27:15.345 [2024-07-16 01:32:40.872169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.345 [2024-07-16 01:32:40.872178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.345 qpair failed and we were unable to recover it. 00:27:15.345 [2024-07-16 01:32:40.872329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.345 [2024-07-16 01:32:40.872346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.345 qpair failed and we were unable to recover it. 00:27:15.345 [2024-07-16 01:32:40.872533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.345 [2024-07-16 01:32:40.872544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.345 qpair failed and we were unable to recover it. 00:27:15.345 [2024-07-16 01:32:40.872625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.345 [2024-07-16 01:32:40.872633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.345 qpair failed and we were unable to recover it. 00:27:15.345 [2024-07-16 01:32:40.872734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.345 [2024-07-16 01:32:40.872744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.345 qpair failed and we were unable to recover it. 00:27:15.345 [2024-07-16 01:32:40.872969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.345 [2024-07-16 01:32:40.872998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.345 qpair failed and we were unable to recover it. 00:27:15.345 [2024-07-16 01:32:40.873208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.345 [2024-07-16 01:32:40.873237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.345 qpair failed and we were unable to recover it. 00:27:15.345 [2024-07-16 01:32:40.873426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.345 [2024-07-16 01:32:40.873457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.345 qpair failed and we were unable to recover it. 00:27:15.345 [2024-07-16 01:32:40.873613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.345 [2024-07-16 01:32:40.873623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.345 qpair failed and we were unable to recover it. 00:27:15.345 [2024-07-16 01:32:40.873717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.345 [2024-07-16 01:32:40.873726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.345 qpair failed and we were unable to recover it. 
00:27:15.345 [2024-07-16 01:32:40.873881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.345 [2024-07-16 01:32:40.873893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.345 qpair failed and we were unable to recover it. 00:27:15.345 [2024-07-16 01:32:40.874142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.345 [2024-07-16 01:32:40.874151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.345 qpair failed and we were unable to recover it. 00:27:15.345 [2024-07-16 01:32:40.874354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.345 [2024-07-16 01:32:40.874364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.345 qpair failed and we were unable to recover it. 00:27:15.345 [2024-07-16 01:32:40.874480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.345 [2024-07-16 01:32:40.874490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.345 qpair failed and we were unable to recover it. 00:27:15.345 [2024-07-16 01:32:40.874667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.345 [2024-07-16 01:32:40.874676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.345 qpair failed and we were unable to recover it. 00:27:15.345 [2024-07-16 01:32:40.874752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.345 [2024-07-16 01:32:40.874760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.345 qpair failed and we were unable to recover it. 00:27:15.345 [2024-07-16 01:32:40.874983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.345 [2024-07-16 01:32:40.874993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.345 qpair failed and we were unable to recover it. 00:27:15.345 [2024-07-16 01:32:40.875233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.345 [2024-07-16 01:32:40.875243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.345 qpair failed and we were unable to recover it. 00:27:15.345 [2024-07-16 01:32:40.875431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.345 [2024-07-16 01:32:40.875441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.345 qpair failed and we were unable to recover it. 00:27:15.345 [2024-07-16 01:32:40.875528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.345 [2024-07-16 01:32:40.875538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.345 qpair failed and we were unable to recover it. 
00:27:15.345 [2024-07-16 01:32:40.875683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.345 [2024-07-16 01:32:40.875693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.345 qpair failed and we were unable to recover it. 00:27:15.345 [2024-07-16 01:32:40.875974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.345 [2024-07-16 01:32:40.875983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.345 qpair failed and we were unable to recover it. 00:27:15.345 [2024-07-16 01:32:40.876126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.345 [2024-07-16 01:32:40.876135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.345 qpair failed and we were unable to recover it. 00:27:15.345 [2024-07-16 01:32:40.876332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.345 [2024-07-16 01:32:40.876350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.345 qpair failed and we were unable to recover it. 00:27:15.345 [2024-07-16 01:32:40.876501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.345 [2024-07-16 01:32:40.876511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.345 qpair failed and we were unable to recover it. 00:27:15.345 [2024-07-16 01:32:40.876684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.345 [2024-07-16 01:32:40.876694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.345 qpair failed and we were unable to recover it. 00:27:15.345 [2024-07-16 01:32:40.876843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.345 [2024-07-16 01:32:40.876852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.345 qpair failed and we were unable to recover it. 00:27:15.345 [2024-07-16 01:32:40.877074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.345 [2024-07-16 01:32:40.877084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.345 qpair failed and we were unable to recover it. 00:27:15.345 [2024-07-16 01:32:40.877280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.345 [2024-07-16 01:32:40.877290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.345 qpair failed and we were unable to recover it. 00:27:15.345 [2024-07-16 01:32:40.877517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.345 [2024-07-16 01:32:40.877527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.345 qpair failed and we were unable to recover it. 
00:27:15.345 [2024-07-16 01:32:40.877615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.345 [2024-07-16 01:32:40.877624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.345 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats for tqpair=0x7ff0bc000b90 through 01:32:40.885843 ...]
00:27:15.347 [2024-07-16 01:32:40.885989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.347 [2024-07-16 01:32:40.886030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420
00:27:15.347 qpair failed and we were unable to recover it.
[... one further identical failure for tqpair=0x7ff0c4000b90 at 01:32:40.886311 ...]
00:27:15.347 [2024-07-16 01:32:40.886590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.347 [2024-07-16 01:32:40.886625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420
00:27:15.347 qpair failed and we were unable to recover it.
[... identical failures repeat for tqpair=0x7ff0b4000b90 through 01:32:40.891729 ...]
00:27:15.347 [2024-07-16 01:32:40.891835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.347 [2024-07-16 01:32:40.891847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.347 qpair failed and we were unable to recover it.
[... identical failures repeat for tqpair=0x7ff0bc000b90 through 01:32:40.918666 ...]
00:27:15.351 [2024-07-16 01:32:40.918891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.351 [2024-07-16 01:32:40.918959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420
00:27:15.351 qpair failed and we were unable to recover it.
[... two further identical failures for tqpair=0x1d2cfc0, the last at 01:32:40.919429 ...]
00:27:15.351 [2024-07-16 01:32:40.919612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.351 [2024-07-16 01:32:40.919627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.351 qpair failed and we were unable to recover it. 00:27:15.351 [2024-07-16 01:32:40.919725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.351 [2024-07-16 01:32:40.919739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.351 qpair failed and we were unable to recover it. 00:27:15.351 [2024-07-16 01:32:40.919903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.351 [2024-07-16 01:32:40.919919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.351 qpair failed and we were unable to recover it. 00:27:15.351 [2024-07-16 01:32:40.920137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.351 [2024-07-16 01:32:40.920152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.351 qpair failed and we were unable to recover it. 00:27:15.351 [2024-07-16 01:32:40.920349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.351 [2024-07-16 01:32:40.920381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.351 qpair failed and we were unable to recover it. 00:27:15.351 [2024-07-16 01:32:40.920568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.351 [2024-07-16 01:32:40.920597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.351 qpair failed and we were unable to recover it. 00:27:15.351 [2024-07-16 01:32:40.920812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.351 [2024-07-16 01:32:40.920841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.351 qpair failed and we were unable to recover it. 00:27:15.351 [2024-07-16 01:32:40.921036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.351 [2024-07-16 01:32:40.921064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.351 qpair failed and we were unable to recover it. 00:27:15.351 [2024-07-16 01:32:40.921330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.351 [2024-07-16 01:32:40.921369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.351 qpair failed and we were unable to recover it. 00:27:15.351 [2024-07-16 01:32:40.921604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.351 [2024-07-16 01:32:40.921619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.351 qpair failed and we were unable to recover it. 
00:27:15.351 [2024-07-16 01:32:40.921727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.351 [2024-07-16 01:32:40.921742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.351 qpair failed and we were unable to recover it. 00:27:15.351 [2024-07-16 01:32:40.921886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.351 [2024-07-16 01:32:40.921902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.351 qpair failed and we were unable to recover it. 00:27:15.351 [2024-07-16 01:32:40.922144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.351 [2024-07-16 01:32:40.922159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.351 qpair failed and we were unable to recover it. 00:27:15.351 [2024-07-16 01:32:40.922334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.351 [2024-07-16 01:32:40.922352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.351 qpair failed and we were unable to recover it. 00:27:15.351 [2024-07-16 01:32:40.922558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.351 [2024-07-16 01:32:40.922574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.351 qpair failed and we were unable to recover it. 00:27:15.351 [2024-07-16 01:32:40.922688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.351 [2024-07-16 01:32:40.922700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.351 qpair failed and we were unable to recover it. 00:27:15.351 [2024-07-16 01:32:40.922819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.351 [2024-07-16 01:32:40.922829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.351 qpair failed and we were unable to recover it. 00:27:15.351 [2024-07-16 01:32:40.923021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.351 [2024-07-16 01:32:40.923031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.351 qpair failed and we were unable to recover it. 00:27:15.351 [2024-07-16 01:32:40.923182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.351 [2024-07-16 01:32:40.923210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.351 qpair failed and we were unable to recover it. 00:27:15.351 [2024-07-16 01:32:40.923461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.351 [2024-07-16 01:32:40.923492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.351 qpair failed and we were unable to recover it. 
00:27:15.351 [2024-07-16 01:32:40.923710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.351 [2024-07-16 01:32:40.923739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.351 qpair failed and we were unable to recover it. 00:27:15.351 [2024-07-16 01:32:40.923913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.351 [2024-07-16 01:32:40.923942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.351 qpair failed and we were unable to recover it. 00:27:15.351 [2024-07-16 01:32:40.924133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.351 [2024-07-16 01:32:40.924162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.351 qpair failed and we were unable to recover it. 00:27:15.351 [2024-07-16 01:32:40.924312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.351 [2024-07-16 01:32:40.924354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.351 qpair failed and we were unable to recover it. 00:27:15.351 [2024-07-16 01:32:40.924503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.351 [2024-07-16 01:32:40.924536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.351 qpair failed and we were unable to recover it. 00:27:15.351 [2024-07-16 01:32:40.924803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.351 [2024-07-16 01:32:40.924833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.351 qpair failed and we were unable to recover it. 00:27:15.351 [2024-07-16 01:32:40.925062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.351 [2024-07-16 01:32:40.925092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.351 qpair failed and we were unable to recover it. 00:27:15.351 [2024-07-16 01:32:40.925362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.351 [2024-07-16 01:32:40.925378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.351 qpair failed and we were unable to recover it. 00:27:15.351 [2024-07-16 01:32:40.925532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.351 [2024-07-16 01:32:40.925546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.351 qpair failed and we were unable to recover it. 00:27:15.351 [2024-07-16 01:32:40.925661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.351 [2024-07-16 01:32:40.925676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.351 qpair failed and we were unable to recover it. 
00:27:15.351 [2024-07-16 01:32:40.925819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.351 [2024-07-16 01:32:40.925834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.351 qpair failed and we were unable to recover it. 00:27:15.352 [2024-07-16 01:32:40.926014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.352 [2024-07-16 01:32:40.926052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.352 qpair failed and we were unable to recover it. 00:27:15.352 [2024-07-16 01:32:40.926252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.352 [2024-07-16 01:32:40.926282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.352 qpair failed and we were unable to recover it. 00:27:15.352 [2024-07-16 01:32:40.926434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.352 [2024-07-16 01:32:40.926465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.352 qpair failed and we were unable to recover it. 00:27:15.352 [2024-07-16 01:32:40.926656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.352 [2024-07-16 01:32:40.926685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.352 qpair failed and we were unable to recover it. 00:27:15.352 [2024-07-16 01:32:40.926831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.352 [2024-07-16 01:32:40.926861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.352 qpair failed and we were unable to recover it. 00:27:15.352 [2024-07-16 01:32:40.927120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.352 [2024-07-16 01:32:40.927150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.352 qpair failed and we were unable to recover it. 00:27:15.352 [2024-07-16 01:32:40.927281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.352 [2024-07-16 01:32:40.927296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.352 qpair failed and we were unable to recover it. 00:27:15.352 [2024-07-16 01:32:40.927484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.352 [2024-07-16 01:32:40.927526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.352 qpair failed and we were unable to recover it. 00:27:15.352 [2024-07-16 01:32:40.927701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.352 [2024-07-16 01:32:40.927731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.352 qpair failed and we were unable to recover it. 
00:27:15.352 [2024-07-16 01:32:40.927862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.352 [2024-07-16 01:32:40.927891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.352 qpair failed and we were unable to recover it. 00:27:15.352 [2024-07-16 01:32:40.928144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.352 [2024-07-16 01:32:40.928173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.352 qpair failed and we were unable to recover it. 00:27:15.352 [2024-07-16 01:32:40.928376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.352 [2024-07-16 01:32:40.928407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.352 qpair failed and we were unable to recover it. 00:27:15.352 [2024-07-16 01:32:40.928672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.352 [2024-07-16 01:32:40.928703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.352 qpair failed and we were unable to recover it. 00:27:15.352 [2024-07-16 01:32:40.928840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.352 [2024-07-16 01:32:40.928869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.352 qpair failed and we were unable to recover it. 00:27:15.352 [2024-07-16 01:32:40.929085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.352 [2024-07-16 01:32:40.929115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.352 qpair failed and we were unable to recover it. 00:27:15.352 [2024-07-16 01:32:40.929300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.352 [2024-07-16 01:32:40.929328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.352 qpair failed and we were unable to recover it. 00:27:15.352 [2024-07-16 01:32:40.929521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.352 [2024-07-16 01:32:40.929551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.352 qpair failed and we were unable to recover it. 00:27:15.352 [2024-07-16 01:32:40.929836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.352 [2024-07-16 01:32:40.929865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.352 qpair failed and we were unable to recover it. 00:27:15.352 [2024-07-16 01:32:40.930072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.352 [2024-07-16 01:32:40.930101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.352 qpair failed and we were unable to recover it. 
00:27:15.352 [2024-07-16 01:32:40.930334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.352 [2024-07-16 01:32:40.930371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.352 qpair failed and we were unable to recover it. 00:27:15.352 [2024-07-16 01:32:40.930504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.352 [2024-07-16 01:32:40.930522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.352 qpair failed and we were unable to recover it. 00:27:15.352 [2024-07-16 01:32:40.930683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.352 [2024-07-16 01:32:40.930698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.352 qpair failed and we were unable to recover it. 00:27:15.352 [2024-07-16 01:32:40.930795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.352 [2024-07-16 01:32:40.930808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.352 qpair failed and we were unable to recover it. 00:27:15.352 [2024-07-16 01:32:40.931033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.352 [2024-07-16 01:32:40.931046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.352 qpair failed and we were unable to recover it. 00:27:15.352 [2024-07-16 01:32:40.931205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.352 [2024-07-16 01:32:40.931232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.352 qpair failed and we were unable to recover it. 00:27:15.352 [2024-07-16 01:32:40.931504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.352 [2024-07-16 01:32:40.931536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.352 qpair failed and we were unable to recover it. 00:27:15.352 [2024-07-16 01:32:40.931717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.352 [2024-07-16 01:32:40.931746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.352 qpair failed and we were unable to recover it. 00:27:15.352 [2024-07-16 01:32:40.931998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.352 [2024-07-16 01:32:40.932027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.352 qpair failed and we were unable to recover it. 00:27:15.352 [2024-07-16 01:32:40.932224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.352 [2024-07-16 01:32:40.932254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.352 qpair failed and we were unable to recover it. 
00:27:15.352 [2024-07-16 01:32:40.932449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.352 [2024-07-16 01:32:40.932460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.352 qpair failed and we were unable to recover it. 00:27:15.352 [2024-07-16 01:32:40.932535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.352 [2024-07-16 01:32:40.932545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.352 qpair failed and we were unable to recover it. 00:27:15.352 [2024-07-16 01:32:40.932693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.352 [2024-07-16 01:32:40.932702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.352 qpair failed and we were unable to recover it. 00:27:15.352 [2024-07-16 01:32:40.932852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.352 [2024-07-16 01:32:40.932862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.352 qpair failed and we were unable to recover it. 00:27:15.352 [2024-07-16 01:32:40.932959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.352 [2024-07-16 01:32:40.932968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.352 qpair failed and we were unable to recover it. 00:27:15.352 [2024-07-16 01:32:40.933175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.352 [2024-07-16 01:32:40.933185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.352 qpair failed and we were unable to recover it. 00:27:15.352 [2024-07-16 01:32:40.934246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.352 [2024-07-16 01:32:40.934268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.352 qpair failed and we were unable to recover it. 00:27:15.352 [2024-07-16 01:32:40.934515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.352 [2024-07-16 01:32:40.934526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.352 qpair failed and we were unable to recover it. 00:27:15.352 [2024-07-16 01:32:40.934630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.352 [2024-07-16 01:32:40.934639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.352 qpair failed and we were unable to recover it. 00:27:15.352 [2024-07-16 01:32:40.934864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.352 [2024-07-16 01:32:40.934874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.352 qpair failed and we were unable to recover it. 
00:27:15.352 [2024-07-16 01:32:40.935048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.352 [2024-07-16 01:32:40.935058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.352 qpair failed and we were unable to recover it. 00:27:15.352 [2024-07-16 01:32:40.935290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.352 [2024-07-16 01:32:40.935320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.353 qpair failed and we were unable to recover it. 00:27:15.353 [2024-07-16 01:32:40.935557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.353 [2024-07-16 01:32:40.935587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.353 qpair failed and we were unable to recover it. 00:27:15.353 [2024-07-16 01:32:40.935758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.353 [2024-07-16 01:32:40.935789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.353 qpair failed and we were unable to recover it. 00:27:15.353 [2024-07-16 01:32:40.935984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.353 [2024-07-16 01:32:40.936013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.353 qpair failed and we were unable to recover it. 00:27:15.353 [2024-07-16 01:32:40.936205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.353 [2024-07-16 01:32:40.936243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.353 qpair failed and we were unable to recover it. 00:27:15.353 [2024-07-16 01:32:40.936334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.353 [2024-07-16 01:32:40.936353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.353 qpair failed and we were unable to recover it. 00:27:15.353 [2024-07-16 01:32:40.936438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.353 [2024-07-16 01:32:40.936447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.353 qpair failed and we were unable to recover it. 00:27:15.353 [2024-07-16 01:32:40.937360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.353 [2024-07-16 01:32:40.937381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.353 qpair failed and we were unable to recover it. 00:27:15.353 [2024-07-16 01:32:40.937539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.353 [2024-07-16 01:32:40.937550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.353 qpair failed and we were unable to recover it. 
00:27:15.353 [2024-07-16 01:32:40.937753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.353 [2024-07-16 01:32:40.937763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.353 qpair failed and we were unable to recover it. 00:27:15.353 [2024-07-16 01:32:40.937865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.353 [2024-07-16 01:32:40.937875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.353 qpair failed and we were unable to recover it. 00:27:15.353 [2024-07-16 01:32:40.937962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.353 [2024-07-16 01:32:40.937971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.353 qpair failed and we were unable to recover it. 00:27:15.353 [2024-07-16 01:32:40.938070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.353 [2024-07-16 01:32:40.938079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.353 qpair failed and we were unable to recover it. 00:27:15.353 [2024-07-16 01:32:40.938161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.353 [2024-07-16 01:32:40.938170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.353 qpair failed and we were unable to recover it. 00:27:15.353 [2024-07-16 01:32:40.938270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.353 [2024-07-16 01:32:40.938279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.353 qpair failed and we were unable to recover it. 00:27:15.353 [2024-07-16 01:32:40.938354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.353 [2024-07-16 01:32:40.938364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.353 qpair failed and we were unable to recover it. 00:27:15.353 [2024-07-16 01:32:40.938446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.353 [2024-07-16 01:32:40.938455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.353 qpair failed and we were unable to recover it. 00:27:15.353 [2024-07-16 01:32:40.938547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.353 [2024-07-16 01:32:40.938555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.353 qpair failed and we were unable to recover it. 00:27:15.353 [2024-07-16 01:32:40.938686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.353 [2024-07-16 01:32:40.938696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.353 qpair failed and we were unable to recover it. 
00:27:15.353 [2024-07-16 01:32:40.938779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.353 [2024-07-16 01:32:40.938788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.353 qpair failed and we were unable to recover it. 00:27:15.353 [2024-07-16 01:32:40.938874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.353 [2024-07-16 01:32:40.938886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.353 qpair failed and we were unable to recover it. 00:27:15.353 [2024-07-16 01:32:40.938967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.353 [2024-07-16 01:32:40.938975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.353 qpair failed and we were unable to recover it. 00:27:15.353 [2024-07-16 01:32:40.939048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.353 [2024-07-16 01:32:40.939057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.353 qpair failed and we were unable to recover it. 00:27:15.353 [2024-07-16 01:32:40.939123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.353 [2024-07-16 01:32:40.939133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.353 qpair failed and we were unable to recover it. 00:27:15.353 [2024-07-16 01:32:40.939276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.353 [2024-07-16 01:32:40.939285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.353 qpair failed and we were unable to recover it. 00:27:15.353 [2024-07-16 01:32:40.939360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.353 [2024-07-16 01:32:40.939369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.353 qpair failed and we were unable to recover it. 00:27:15.353 [2024-07-16 01:32:40.939501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.353 [2024-07-16 01:32:40.939510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.353 qpair failed and we were unable to recover it. 00:27:15.353 [2024-07-16 01:32:40.939579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.353 [2024-07-16 01:32:40.939588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.353 qpair failed and we were unable to recover it. 00:27:15.353 [2024-07-16 01:32:40.939677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.353 [2024-07-16 01:32:40.939686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.353 qpair failed and we were unable to recover it. 
00:27:15.353 [2024-07-16 01:32:40.939764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.353 [2024-07-16 01:32:40.939773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.353 qpair failed and we were unable to recover it. 00:27:15.353 [2024-07-16 01:32:40.939864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.353 [2024-07-16 01:32:40.939873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.353 qpair failed and we were unable to recover it. 00:27:15.353 [2024-07-16 01:32:40.939944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.353 [2024-07-16 01:32:40.939953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.353 qpair failed and we were unable to recover it. 00:27:15.353 [2024-07-16 01:32:40.940016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.353 [2024-07-16 01:32:40.940026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.353 qpair failed and we were unable to recover it. 00:27:15.353 [2024-07-16 01:32:40.940112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.353 [2024-07-16 01:32:40.940121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.353 qpair failed and we were unable to recover it. 00:27:15.353 [2024-07-16 01:32:40.940220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.353 [2024-07-16 01:32:40.940229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.353 qpair failed and we were unable to recover it. 00:27:15.353 [2024-07-16 01:32:40.940316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.353 [2024-07-16 01:32:40.940325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.353 qpair failed and we were unable to recover it. 00:27:15.353 [2024-07-16 01:32:40.940475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.353 [2024-07-16 01:32:40.940486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.353 qpair failed and we were unable to recover it. 00:27:15.353 [2024-07-16 01:32:40.940552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.353 [2024-07-16 01:32:40.940560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.353 qpair failed and we were unable to recover it. 00:27:15.353 [2024-07-16 01:32:40.940652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.353 [2024-07-16 01:32:40.940661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.353 qpair failed and we were unable to recover it. 
00:27:15.353 [2024-07-16 01:32:40.940745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.353 [2024-07-16 01:32:40.940754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.353 qpair failed and we were unable to recover it. 00:27:15.353 [2024-07-16 01:32:40.940832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.353 [2024-07-16 01:32:40.940841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.353 qpair failed and we were unable to recover it. 00:27:15.353 [2024-07-16 01:32:40.940917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.354 [2024-07-16 01:32:40.940926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.354 qpair failed and we were unable to recover it. 00:27:15.354 [2024-07-16 01:32:40.941021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.354 [2024-07-16 01:32:40.941030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.354 qpair failed and we were unable to recover it. 00:27:15.354 [2024-07-16 01:32:40.941116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.354 [2024-07-16 01:32:40.941125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.354 qpair failed and we were unable to recover it. 00:27:15.354 [2024-07-16 01:32:40.941200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.354 [2024-07-16 01:32:40.941210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.354 qpair failed and we were unable to recover it. 00:27:15.354 [2024-07-16 01:32:40.941280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.354 [2024-07-16 01:32:40.941290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.354 qpair failed and we were unable to recover it. 00:27:15.354 [2024-07-16 01:32:40.941381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.354 [2024-07-16 01:32:40.941390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.354 qpair failed and we were unable to recover it. 00:27:15.354 [2024-07-16 01:32:40.942074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.354 [2024-07-16 01:32:40.942095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.354 qpair failed and we were unable to recover it. 00:27:15.354 [2024-07-16 01:32:40.942191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.354 [2024-07-16 01:32:40.942201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.354 qpair failed and we were unable to recover it. 
00:27:15.354 [2024-07-16 01:32:40.942264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.354 [2024-07-16 01:32:40.942273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.354 qpair failed and we were unable to recover it. 00:27:15.354 [2024-07-16 01:32:40.942348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.354 [2024-07-16 01:32:40.942357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.354 qpair failed and we were unable to recover it. 00:27:15.354 [2024-07-16 01:32:40.942427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.354 [2024-07-16 01:32:40.942437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.354 qpair failed and we were unable to recover it. 00:27:15.354 [2024-07-16 01:32:40.942505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.354 [2024-07-16 01:32:40.942514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.354 qpair failed and we were unable to recover it. 00:27:15.354 [2024-07-16 01:32:40.942600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.354 [2024-07-16 01:32:40.942609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.354 qpair failed and we were unable to recover it. 00:27:15.354 [2024-07-16 01:32:40.942745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.354 [2024-07-16 01:32:40.942754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.354 qpair failed and we were unable to recover it. 00:27:15.354 [2024-07-16 01:32:40.942837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.354 [2024-07-16 01:32:40.942846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.354 qpair failed and we were unable to recover it. 00:27:15.354 [2024-07-16 01:32:40.942972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.354 [2024-07-16 01:32:40.942982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.354 qpair failed and we were unable to recover it. 00:27:15.354 [2024-07-16 01:32:40.943113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.354 [2024-07-16 01:32:40.943123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.354 qpair failed and we were unable to recover it. 00:27:15.354 [2024-07-16 01:32:40.943256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.354 [2024-07-16 01:32:40.943268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.354 qpair failed and we were unable to recover it. 
00:27:15.354 [2024-07-16 01:32:40.943328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.354 [2024-07-16 01:32:40.943342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.354 qpair failed and we were unable to recover it.
00:27:15.355 [2024-07-16 01:32:40.946485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.355 [2024-07-16 01:32:40.946552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420
00:27:15.355 qpair failed and we were unable to recover it.
00:27:15.359 [2024-07-16 01:32:40.965399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.359 [2024-07-16 01:32:40.965421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420
00:27:15.359 qpair failed and we were unable to recover it.
00:27:15.359 [2024-07-16 01:32:40.968546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.359 [2024-07-16 01:32:40.968561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.359 qpair failed and we were unable to recover it. 00:27:15.359 [2024-07-16 01:32:40.968765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.359 [2024-07-16 01:32:40.968780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.359 qpair failed and we were unable to recover it. 00:27:15.359 [2024-07-16 01:32:40.968922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.359 [2024-07-16 01:32:40.968936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.359 qpair failed and we were unable to recover it. 00:27:15.359 [2024-07-16 01:32:40.969074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.359 [2024-07-16 01:32:40.969089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.359 qpair failed and we were unable to recover it. 00:27:15.359 [2024-07-16 01:32:40.969186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.359 [2024-07-16 01:32:40.969200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.359 qpair failed and we were unable to recover it. 00:27:15.359 [2024-07-16 01:32:40.969304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.359 [2024-07-16 01:32:40.969325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.359 qpair failed and we were unable to recover it. 00:27:15.359 [2024-07-16 01:32:40.969430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.359 [2024-07-16 01:32:40.969446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.359 qpair failed and we were unable to recover it. 00:27:15.359 [2024-07-16 01:32:40.969543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.359 [2024-07-16 01:32:40.969557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.359 qpair failed and we were unable to recover it. 00:27:15.359 [2024-07-16 01:32:40.969654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.359 [2024-07-16 01:32:40.969668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.359 qpair failed and we were unable to recover it. 00:27:15.359 [2024-07-16 01:32:40.969738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.359 [2024-07-16 01:32:40.969752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.359 qpair failed and we were unable to recover it. 
00:27:15.359 [2024-07-16 01:32:40.969945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.359 [2024-07-16 01:32:40.969960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.359 qpair failed and we were unable to recover it. 00:27:15.359 [2024-07-16 01:32:40.970047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.359 [2024-07-16 01:32:40.970061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.359 qpair failed and we were unable to recover it. 00:27:15.360 [2024-07-16 01:32:40.970152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-07-16 01:32:40.970167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 00:27:15.360 [2024-07-16 01:32:40.970393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-07-16 01:32:40.970409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 00:27:15.360 [2024-07-16 01:32:40.970499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-07-16 01:32:40.970513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 00:27:15.360 [2024-07-16 01:32:40.970600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-07-16 01:32:40.970614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 00:27:15.360 [2024-07-16 01:32:40.970702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-07-16 01:32:40.970718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 00:27:15.360 [2024-07-16 01:32:40.970868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-07-16 01:32:40.970883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 00:27:15.360 [2024-07-16 01:32:40.971029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-07-16 01:32:40.971040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 00:27:15.360 [2024-07-16 01:32:40.971134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-07-16 01:32:40.971145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 
00:27:15.360 [2024-07-16 01:32:40.971284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-07-16 01:32:40.971294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 00:27:15.360 [2024-07-16 01:32:40.971375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-07-16 01:32:40.971386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 00:27:15.360 [2024-07-16 01:32:40.971476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-07-16 01:32:40.971486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 00:27:15.360 [2024-07-16 01:32:40.971565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-07-16 01:32:40.971575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 00:27:15.360 [2024-07-16 01:32:40.971642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-07-16 01:32:40.971652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 00:27:15.360 [2024-07-16 01:32:40.971857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-07-16 01:32:40.971867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 00:27:15.360 [2024-07-16 01:32:40.971934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-07-16 01:32:40.971943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 00:27:15.360 [2024-07-16 01:32:40.972094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-07-16 01:32:40.972105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 00:27:15.360 [2024-07-16 01:32:40.972182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-07-16 01:32:40.972192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 00:27:15.360 [2024-07-16 01:32:40.972272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-07-16 01:32:40.972283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 
00:27:15.360 [2024-07-16 01:32:40.972387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-07-16 01:32:40.972398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 00:27:15.360 [2024-07-16 01:32:40.972554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-07-16 01:32:40.972564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 00:27:15.360 [2024-07-16 01:32:40.972658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-07-16 01:32:40.972674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 00:27:15.360 [2024-07-16 01:32:40.972751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-07-16 01:32:40.972766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 00:27:15.360 [2024-07-16 01:32:40.972922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-07-16 01:32:40.972937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 00:27:15.360 [2024-07-16 01:32:40.973073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-07-16 01:32:40.973084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 00:27:15.360 [2024-07-16 01:32:40.973162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-07-16 01:32:40.973171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 00:27:15.360 [2024-07-16 01:32:40.973313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-07-16 01:32:40.973324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 00:27:15.360 [2024-07-16 01:32:40.973409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-07-16 01:32:40.973421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 00:27:15.360 [2024-07-16 01:32:40.973561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-07-16 01:32:40.973572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 
00:27:15.360 [2024-07-16 01:32:40.973661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-07-16 01:32:40.973671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 00:27:15.360 [2024-07-16 01:32:40.973837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-07-16 01:32:40.973847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 00:27:15.360 [2024-07-16 01:32:40.973920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-07-16 01:32:40.973930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 00:27:15.360 [2024-07-16 01:32:40.974002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-07-16 01:32:40.974011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 00:27:15.360 [2024-07-16 01:32:40.974157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-07-16 01:32:40.974182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 00:27:15.360 [2024-07-16 01:32:40.974252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-07-16 01:32:40.974262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 00:27:15.360 [2024-07-16 01:32:40.974345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-07-16 01:32:40.974367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 00:27:15.360 [2024-07-16 01:32:40.974465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-07-16 01:32:40.974475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 00:27:15.360 [2024-07-16 01:32:40.974551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-07-16 01:32:40.974561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 00:27:15.360 [2024-07-16 01:32:40.974630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-07-16 01:32:40.974639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 
00:27:15.360 [2024-07-16 01:32:40.974709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-07-16 01:32:40.974719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 00:27:15.360 [2024-07-16 01:32:40.974800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-07-16 01:32:40.974810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 00:27:15.360 [2024-07-16 01:32:40.974908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-07-16 01:32:40.974918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 00:27:15.360 [2024-07-16 01:32:40.974983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-07-16 01:32:40.974992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 00:27:15.360 [2024-07-16 01:32:40.975073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-07-16 01:32:40.975083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 00:27:15.360 [2024-07-16 01:32:40.975227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.360 [2024-07-16 01:32:40.975237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.360 qpair failed and we were unable to recover it. 00:27:15.361 [2024-07-16 01:32:40.975298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.975307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 00:27:15.361 [2024-07-16 01:32:40.975398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.975419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 00:27:15.361 [2024-07-16 01:32:40.975494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.975504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 00:27:15.361 [2024-07-16 01:32:40.975573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.975585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 
00:27:15.361 [2024-07-16 01:32:40.975748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.975759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 00:27:15.361 [2024-07-16 01:32:40.975815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.975824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 00:27:15.361 [2024-07-16 01:32:40.975910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.975919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 00:27:15.361 [2024-07-16 01:32:40.976018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.976028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 00:27:15.361 [2024-07-16 01:32:40.976164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.976174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 00:27:15.361 [2024-07-16 01:32:40.976260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.976270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 00:27:15.361 [2024-07-16 01:32:40.976470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.976480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 00:27:15.361 [2024-07-16 01:32:40.976538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.976547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 00:27:15.361 [2024-07-16 01:32:40.976620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.976629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 00:27:15.361 [2024-07-16 01:32:40.976722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.976732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 
00:27:15.361 [2024-07-16 01:32:40.976805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.976815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 00:27:15.361 [2024-07-16 01:32:40.976898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.976907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 00:27:15.361 [2024-07-16 01:32:40.976989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.977000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 00:27:15.361 [2024-07-16 01:32:40.977084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.977093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 00:27:15.361 [2024-07-16 01:32:40.977156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.977166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 00:27:15.361 [2024-07-16 01:32:40.977366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.977378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 00:27:15.361 [2024-07-16 01:32:40.977447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.977457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 00:27:15.361 [2024-07-16 01:32:40.977533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.977542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 00:27:15.361 [2024-07-16 01:32:40.977609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.977619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 00:27:15.361 [2024-07-16 01:32:40.977715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.977725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 
00:27:15.361 [2024-07-16 01:32:40.977804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.977813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 00:27:15.361 [2024-07-16 01:32:40.977950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.977959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 00:27:15.361 [2024-07-16 01:32:40.978032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.978042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 00:27:15.361 [2024-07-16 01:32:40.978108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.978118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 00:27:15.361 [2024-07-16 01:32:40.978194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.978204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 00:27:15.361 [2024-07-16 01:32:40.978264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.978273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 00:27:15.361 [2024-07-16 01:32:40.978364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.978374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 00:27:15.361 [2024-07-16 01:32:40.978448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.978458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 00:27:15.361 [2024-07-16 01:32:40.978527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.978537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 00:27:15.361 [2024-07-16 01:32:40.978624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.978634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 
00:27:15.361 [2024-07-16 01:32:40.978785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.978794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 00:27:15.361 [2024-07-16 01:32:40.978931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.978941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 00:27:15.361 [2024-07-16 01:32:40.979091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.979102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 00:27:15.361 [2024-07-16 01:32:40.979258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.979268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 00:27:15.361 [2024-07-16 01:32:40.979520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.979530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 00:27:15.361 [2024-07-16 01:32:40.979636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.979646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 00:27:15.361 [2024-07-16 01:32:40.979787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.979797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 00:27:15.361 [2024-07-16 01:32:40.979899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.979909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 00:27:15.361 [2024-07-16 01:32:40.980050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.980060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 00:27:15.361 [2024-07-16 01:32:40.980274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.980283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 
00:27:15.361 [2024-07-16 01:32:40.980375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.980385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 00:27:15.361 [2024-07-16 01:32:40.980474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.980484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 00:27:15.361 [2024-07-16 01:32:40.980614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.361 [2024-07-16 01:32:40.980623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.361 qpair failed and we were unable to recover it. 00:27:15.362 [2024-07-16 01:32:40.980708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.362 [2024-07-16 01:32:40.980717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.362 qpair failed and we were unable to recover it. 00:27:15.362 [2024-07-16 01:32:40.980855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.362 [2024-07-16 01:32:40.980864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.362 qpair failed and we were unable to recover it. 00:27:15.362 [2024-07-16 01:32:40.981066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.362 [2024-07-16 01:32:40.981076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.362 qpair failed and we were unable to recover it. 00:27:15.362 [2024-07-16 01:32:40.981203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.362 [2024-07-16 01:32:40.981212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.362 qpair failed and we were unable to recover it. 00:27:15.362 [2024-07-16 01:32:40.981314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.362 [2024-07-16 01:32:40.981324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.362 qpair failed and we were unable to recover it. 00:27:15.362 [2024-07-16 01:32:40.981490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.362 [2024-07-16 01:32:40.981501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.362 qpair failed and we were unable to recover it. 00:27:15.362 [2024-07-16 01:32:40.981671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.362 [2024-07-16 01:32:40.981700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.362 qpair failed and we were unable to recover it. 
00:27:15.362 [2024-07-16 01:32:40.982859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.362 [2024-07-16 01:32:40.982880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.362 qpair failed and we were unable to recover it. 00:27:15.362 [2024-07-16 01:32:40.983191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.362 [2024-07-16 01:32:40.983227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.362 qpair failed and we were unable to recover it. 00:27:15.362 [2024-07-16 01:32:40.983420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.362 [2024-07-16 01:32:40.983442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.362 qpair failed and we were unable to recover it. 00:27:15.362 [2024-07-16 01:32:40.983564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.362 [2024-07-16 01:32:40.983579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.362 qpair failed and we were unable to recover it. 00:27:15.362 [2024-07-16 01:32:40.983685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.362 [2024-07-16 01:32:40.983699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.362 qpair failed and we were unable to recover it. 00:27:15.362 [2024-07-16 01:32:40.983880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.362 [2024-07-16 01:32:40.983895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.362 qpair failed and we were unable to recover it. 00:27:15.362 [2024-07-16 01:32:40.983995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.362 [2024-07-16 01:32:40.984010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.362 qpair failed and we were unable to recover it. 00:27:15.362 [2024-07-16 01:32:40.984110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.362 [2024-07-16 01:32:40.984124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.362 qpair failed and we were unable to recover it. 00:27:15.362 [2024-07-16 01:32:40.984277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.362 [2024-07-16 01:32:40.984292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.362 qpair failed and we were unable to recover it. 00:27:15.362 [2024-07-16 01:32:40.984469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.362 [2024-07-16 01:32:40.984484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.362 qpair failed and we were unable to recover it. 
00:27:15.362 [2024-07-16 01:32:40.984566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.362 [2024-07-16 01:32:40.984580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.362 qpair failed and we were unable to recover it. 00:27:15.362 [2024-07-16 01:32:40.984722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.362 [2024-07-16 01:32:40.984736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.362 qpair failed and we were unable to recover it. 00:27:15.362 [2024-07-16 01:32:40.984983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.362 [2024-07-16 01:32:40.985012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.362 qpair failed and we were unable to recover it. 00:27:15.362 [2024-07-16 01:32:40.985292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.362 [2024-07-16 01:32:40.985322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.362 qpair failed and we were unable to recover it. 00:27:15.362 [2024-07-16 01:32:40.985456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.362 [2024-07-16 01:32:40.985470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.362 qpair failed and we were unable to recover it. 00:27:15.362 [2024-07-16 01:32:40.985623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.362 [2024-07-16 01:32:40.985637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.362 qpair failed and we were unable to recover it. 00:27:15.362 [2024-07-16 01:32:40.985736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.362 [2024-07-16 01:32:40.985750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.362 qpair failed and we were unable to recover it. 00:27:15.362 [2024-07-16 01:32:40.986005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.362 [2024-07-16 01:32:40.986034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.362 qpair failed and we were unable to recover it. 00:27:15.362 [2024-07-16 01:32:40.986226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.362 [2024-07-16 01:32:40.986255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.362 qpair failed and we were unable to recover it. 00:27:15.362 [2024-07-16 01:32:40.986443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.362 [2024-07-16 01:32:40.986474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.362 qpair failed and we were unable to recover it. 
00:27:15.362 [2024-07-16 01:32:40.986734 .. 01:32:41.023567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.362 nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=<see below> with addr=10.0.0.2, port=4420
00:27:15.362 qpair failed and we were unable to recover it.
00:27:15.362 (condensed: the three-line error sequence above repeated 210 consecutive times in this interval, varying only in timestamp and tqpair id)
00:27:15.362 affected qpairs, in order: tqpair=0x7ff0c4000b90 (14 attempts, 01:32:40.986734-40.988832); tqpair=0x7ff0bc000b90 (90 attempts, 01:32:40.988999-41.004544); tqpair=0x1d2cfc0 (16 attempts, 01:32:41.004717-41.007409); tqpair=0x7ff0bc000b90 again (90 attempts, 01:32:41.007707-41.023567)
00:27:15.368 [2024-07-16 01:32:41.023798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-07-16 01:32:41.023809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-07-16 01:32:41.023898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-07-16 01:32:41.023912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-07-16 01:32:41.024006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-07-16 01:32:41.024019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-07-16 01:32:41.024256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-07-16 01:32:41.024266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-07-16 01:32:41.024410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-07-16 01:32:41.024420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-07-16 01:32:41.024499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-07-16 01:32:41.024509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-07-16 01:32:41.024579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-07-16 01:32:41.024589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-07-16 01:32:41.024666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-07-16 01:32:41.024676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-07-16 01:32:41.024817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-07-16 01:32:41.024828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-07-16 01:32:41.024973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-07-16 01:32:41.024984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 
00:27:15.368 [2024-07-16 01:32:41.025060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-07-16 01:32:41.025070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-07-16 01:32:41.025233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-07-16 01:32:41.025244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-07-16 01:32:41.025395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-07-16 01:32:41.025407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-07-16 01:32:41.025609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-07-16 01:32:41.025619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-07-16 01:32:41.025793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-07-16 01:32:41.025804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-07-16 01:32:41.025880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-07-16 01:32:41.025890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-07-16 01:32:41.026000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-07-16 01:32:41.026010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-07-16 01:32:41.026089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-07-16 01:32:41.026099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-07-16 01:32:41.026270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-07-16 01:32:41.026282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-07-16 01:32:41.026481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-07-16 01:32:41.026493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 
00:27:15.368 [2024-07-16 01:32:41.026574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-07-16 01:32:41.026584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-07-16 01:32:41.026755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-07-16 01:32:41.026765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.368 [2024-07-16 01:32:41.026978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.368 [2024-07-16 01:32:41.026989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.368 qpair failed and we were unable to recover it. 00:27:15.369 [2024-07-16 01:32:41.027204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-07-16 01:32:41.027214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-07-16 01:32:41.027408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-07-16 01:32:41.027420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-07-16 01:32:41.027556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-07-16 01:32:41.027566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-07-16 01:32:41.027682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-07-16 01:32:41.027694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-07-16 01:32:41.027855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-07-16 01:32:41.027865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-07-16 01:32:41.027944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-07-16 01:32:41.027954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-07-16 01:32:41.028120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-07-16 01:32:41.028130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 
00:27:15.369 [2024-07-16 01:32:41.028329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-07-16 01:32:41.028345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-07-16 01:32:41.028505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-07-16 01:32:41.028515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-07-16 01:32:41.028672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-07-16 01:32:41.028682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-07-16 01:32:41.028764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-07-16 01:32:41.028773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-07-16 01:32:41.028860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-07-16 01:32:41.028868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-07-16 01:32:41.028946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-07-16 01:32:41.028956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-07-16 01:32:41.029167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-07-16 01:32:41.029177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-07-16 01:32:41.029278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-07-16 01:32:41.029289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-07-16 01:32:41.029368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-07-16 01:32:41.029377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-07-16 01:32:41.029454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-07-16 01:32:41.029467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 
00:27:15.369 [2024-07-16 01:32:41.029549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-07-16 01:32:41.029558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-07-16 01:32:41.029738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-07-16 01:32:41.029750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-07-16 01:32:41.029840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-07-16 01:32:41.029850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-07-16 01:32:41.030032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-07-16 01:32:41.030043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-07-16 01:32:41.030193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-07-16 01:32:41.030203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-07-16 01:32:41.030344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-07-16 01:32:41.030358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-07-16 01:32:41.030609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-07-16 01:32:41.030620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-07-16 01:32:41.030686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-07-16 01:32:41.030695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-07-16 01:32:41.030853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-07-16 01:32:41.030864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-07-16 01:32:41.030977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-07-16 01:32:41.030988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 
00:27:15.369 [2024-07-16 01:32:41.031184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-07-16 01:32:41.031194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-07-16 01:32:41.031295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-07-16 01:32:41.031306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-07-16 01:32:41.031471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-07-16 01:32:41.031481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-07-16 01:32:41.031565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-07-16 01:32:41.031574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-07-16 01:32:41.031719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-07-16 01:32:41.031729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-07-16 01:32:41.031871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-07-16 01:32:41.031882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-07-16 01:32:41.031973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-07-16 01:32:41.031983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-07-16 01:32:41.032196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-07-16 01:32:41.032207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-07-16 01:32:41.032404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-07-16 01:32:41.032415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.369 [2024-07-16 01:32:41.032559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-07-16 01:32:41.032570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 
00:27:15.369 [2024-07-16 01:32:41.032656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.369 [2024-07-16 01:32:41.032666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.369 qpair failed and we were unable to recover it. 00:27:15.370 [2024-07-16 01:32:41.032857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-07-16 01:32:41.032868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-07-16 01:32:41.033027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-07-16 01:32:41.033037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-07-16 01:32:41.033193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-07-16 01:32:41.033203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-07-16 01:32:41.033393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-07-16 01:32:41.033404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-07-16 01:32:41.033627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-07-16 01:32:41.033637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-07-16 01:32:41.033760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-07-16 01:32:41.033770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-07-16 01:32:41.033856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-07-16 01:32:41.033866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-07-16 01:32:41.033935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-07-16 01:32:41.033944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-07-16 01:32:41.034095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-07-16 01:32:41.034106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 
00:27:15.370 [2024-07-16 01:32:41.034200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-07-16 01:32:41.034211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-07-16 01:32:41.034390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-07-16 01:32:41.034401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-07-16 01:32:41.034541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-07-16 01:32:41.034551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-07-16 01:32:41.034819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-07-16 01:32:41.034830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-07-16 01:32:41.035007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-07-16 01:32:41.035019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-07-16 01:32:41.035165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-07-16 01:32:41.035175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-07-16 01:32:41.035250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-07-16 01:32:41.035260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-07-16 01:32:41.035352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-07-16 01:32:41.035362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-07-16 01:32:41.035476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-07-16 01:32:41.035486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-07-16 01:32:41.035560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-07-16 01:32:41.035571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 
00:27:15.370 [2024-07-16 01:32:41.035654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-07-16 01:32:41.035664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-07-16 01:32:41.035748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-07-16 01:32:41.035757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-07-16 01:32:41.035839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-07-16 01:32:41.035849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-07-16 01:32:41.035934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-07-16 01:32:41.035943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-07-16 01:32:41.036106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-07-16 01:32:41.036116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-07-16 01:32:41.036262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-07-16 01:32:41.036272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-07-16 01:32:41.036364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-07-16 01:32:41.036374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-07-16 01:32:41.036510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-07-16 01:32:41.036520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-07-16 01:32:41.036592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-07-16 01:32:41.036602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-07-16 01:32:41.036706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-07-16 01:32:41.036715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 
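Triage note: errno = 111 is ECONNREFUSED on Linux, meaning the target host was reachable but nothing was accepting connections on 10.0.0.2 port 4420 (the NVMe-oF port this test uses) at the moment posix_sock_create called connect(). A minimal standalone sketch, not SPDK code, that reproduces the same errno against a reachable host with no listener on the port; the address and port simply mirror the log:

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    /* Plain blocking TCP socket; the failing call in the log above is
     * ultimately the same connect(2) syscall. */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                    /* NVMe-oF port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With a reachable host but no listener on the port, this prints:
         *   connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Compiled with a plain cc invocation, this prints "connect() failed, errno = 111 (Connection refused)", matching the failures recorded above.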
00:27:15.370 [2024-07-16 01:32:41.036890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-07-16 01:32:41.036901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-07-16 01:32:41.037058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-07-16 01:32:41.037068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-07-16 01:32:41.037216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-07-16 01:32:41.037226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-07-16 01:32:41.037314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-07-16 01:32:41.037324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-07-16 01:32:41.037472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-07-16 01:32:41.037483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-07-16 01:32:41.037591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3b040 is same with the state(5) to be set 00:27:15.370 [2024-07-16 01:32:41.037721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-07-16 01:32:41.037743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-07-16 01:32:41.037905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-07-16 01:32:41.037920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-07-16 01:32:41.038084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-07-16 01:32:41.038098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-07-16 01:32:41.038256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-07-16 01:32:41.038271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 
00:27:15.370 [2024-07-16 01:32:41.038478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.370 [2024-07-16 01:32:41.038495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.370 qpair failed and we were unable to recover it. 00:27:15.370 [2024-07-16 01:32:41.038602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.371 [2024-07-16 01:32:41.038617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.371 qpair failed and we were unable to recover it. 00:27:15.371 [2024-07-16 01:32:41.038726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.371 [2024-07-16 01:32:41.038741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.371 qpair failed and we were unable to recover it. 00:27:15.371 [2024-07-16 01:32:41.038975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.371 [2024-07-16 01:32:41.038990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.371 qpair failed and we were unable to recover it. 00:27:15.371 [2024-07-16 01:32:41.039210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.371 [2024-07-16 01:32:41.039224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.371 qpair failed and we were unable to recover it. 00:27:15.371 [2024-07-16 01:32:41.039453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.371 [2024-07-16 01:32:41.039466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.371 qpair failed and we were unable to recover it. 00:27:15.371 [2024-07-16 01:32:41.039628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.371 [2024-07-16 01:32:41.039639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.371 qpair failed and we were unable to recover it. 00:27:15.371 [2024-07-16 01:32:41.039808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.371 [2024-07-16 01:32:41.039819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.371 qpair failed and we were unable to recover it. 00:27:15.371 [2024-07-16 01:32:41.039886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.371 [2024-07-16 01:32:41.039895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.371 qpair failed and we were unable to recover it. 00:27:15.371 [2024-07-16 01:32:41.040080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.371 [2024-07-16 01:32:41.040090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.371 qpair failed and we were unable to recover it. 
00:27:15.372 [... identical connect() failures (errno = 111) to tqpair=0x7ff0bc000b90 at addr=10.0.0.2, port=4420 continue through 01:32:41.050892, each ending with "qpair failed and we were unable to recover it." ...]
00:27:15.372 [2024-07-16 01:32:41.051081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.372 [2024-07-16 01:32:41.051110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.372 qpair failed and we were unable to recover it. 00:27:15.372 [2024-07-16 01:32:41.051376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.372 [2024-07-16 01:32:41.051407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.372 qpair failed and we were unable to recover it. 00:27:15.372 [2024-07-16 01:32:41.051533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.372 [2024-07-16 01:32:41.051543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.372 qpair failed and we were unable to recover it. 00:27:15.372 [2024-07-16 01:32:41.051739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.372 [2024-07-16 01:32:41.051751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.372 qpair failed and we were unable to recover it. 00:27:15.372 [2024-07-16 01:32:41.051998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.372 [2024-07-16 01:32:41.052028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-07-16 01:32:41.052285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-07-16 01:32:41.052314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-07-16 01:32:41.052524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-07-16 01:32:41.052535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-07-16 01:32:41.052757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-07-16 01:32:41.052767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-07-16 01:32:41.052957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-07-16 01:32:41.052967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-07-16 01:32:41.053170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-07-16 01:32:41.053199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 
00:27:15.373 [2024-07-16 01:32:41.053477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-07-16 01:32:41.053507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-07-16 01:32:41.053693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-07-16 01:32:41.053722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-07-16 01:32:41.053965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-07-16 01:32:41.053975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-07-16 01:32:41.054242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-07-16 01:32:41.054252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-07-16 01:32:41.054450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-07-16 01:32:41.054462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-07-16 01:32:41.054641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-07-16 01:32:41.054651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-07-16 01:32:41.054867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-07-16 01:32:41.054896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-07-16 01:32:41.055083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-07-16 01:32:41.055112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-07-16 01:32:41.055393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-07-16 01:32:41.055424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-07-16 01:32:41.055693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-07-16 01:32:41.055703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 
00:27:15.373 [2024-07-16 01:32:41.055833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-07-16 01:32:41.055843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-07-16 01:32:41.056074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-07-16 01:32:41.056103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-07-16 01:32:41.056370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-07-16 01:32:41.056401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-07-16 01:32:41.056509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-07-16 01:32:41.056538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-07-16 01:32:41.056706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-07-16 01:32:41.056716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-07-16 01:32:41.056922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-07-16 01:32:41.056931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-07-16 01:32:41.057110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-07-16 01:32:41.057119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-07-16 01:32:41.057359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-07-16 01:32:41.057368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-07-16 01:32:41.057514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-07-16 01:32:41.057523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-07-16 01:32:41.057596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-07-16 01:32:41.057605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 
00:27:15.373 [2024-07-16 01:32:41.057801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-07-16 01:32:41.057811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-07-16 01:32:41.057888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-07-16 01:32:41.057897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-07-16 01:32:41.058053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-07-16 01:32:41.058063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-07-16 01:32:41.058230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-07-16 01:32:41.058239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-07-16 01:32:41.058406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-07-16 01:32:41.058417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-07-16 01:32:41.058674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-07-16 01:32:41.058683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-07-16 01:32:41.058840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-07-16 01:32:41.058849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-07-16 01:32:41.059059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-07-16 01:32:41.059068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-07-16 01:32:41.059152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-07-16 01:32:41.059161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-07-16 01:32:41.059406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-07-16 01:32:41.059437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 
00:27:15.373 [2024-07-16 01:32:41.059694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-07-16 01:32:41.059723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-07-16 01:32:41.059913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-07-16 01:32:41.059942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-07-16 01:32:41.060122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-07-16 01:32:41.060151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-07-16 01:32:41.060412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-07-16 01:32:41.060448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.373 qpair failed and we were unable to recover it. 00:27:15.373 [2024-07-16 01:32:41.060716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.373 [2024-07-16 01:32:41.060726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-07-16 01:32:41.060880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-07-16 01:32:41.060890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-07-16 01:32:41.060980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-07-16 01:32:41.060989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-07-16 01:32:41.061188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-07-16 01:32:41.061197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-07-16 01:32:41.061381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-07-16 01:32:41.061391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-07-16 01:32:41.061528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-07-16 01:32:41.061538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 
00:27:15.374 [2024-07-16 01:32:41.061668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-07-16 01:32:41.061678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-07-16 01:32:41.061752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-07-16 01:32:41.061761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-07-16 01:32:41.061928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-07-16 01:32:41.061938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-07-16 01:32:41.062135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-07-16 01:32:41.062145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-07-16 01:32:41.062275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-07-16 01:32:41.062285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-07-16 01:32:41.062474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-07-16 01:32:41.062485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-07-16 01:32:41.062586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-07-16 01:32:41.062616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-07-16 01:32:41.062888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-07-16 01:32:41.062918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-07-16 01:32:41.063138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-07-16 01:32:41.063167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-07-16 01:32:41.063420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-07-16 01:32:41.063451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 
00:27:15.374 [2024-07-16 01:32:41.063711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-07-16 01:32:41.063740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-07-16 01:32:41.063923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-07-16 01:32:41.063952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-07-16 01:32:41.064116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-07-16 01:32:41.064126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-07-16 01:32:41.064357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-07-16 01:32:41.064388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-07-16 01:32:41.064624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-07-16 01:32:41.064654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-07-16 01:32:41.064860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-07-16 01:32:41.064870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-07-16 01:32:41.065019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-07-16 01:32:41.065029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-07-16 01:32:41.065191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-07-16 01:32:41.065201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-07-16 01:32:41.065344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-07-16 01:32:41.065354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-07-16 01:32:41.065494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-07-16 01:32:41.065504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 
00:27:15.374 [2024-07-16 01:32:41.065656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-07-16 01:32:41.065666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-07-16 01:32:41.065754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-07-16 01:32:41.065763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-07-16 01:32:41.065865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-07-16 01:32:41.065874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-07-16 01:32:41.065947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-07-16 01:32:41.065956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-07-16 01:32:41.066098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-07-16 01:32:41.066108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-07-16 01:32:41.066196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-07-16 01:32:41.066204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-07-16 01:32:41.066301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-07-16 01:32:41.066310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-07-16 01:32:41.066377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-07-16 01:32:41.066388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-07-16 01:32:41.066555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-07-16 01:32:41.066564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-07-16 01:32:41.066813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-07-16 01:32:41.066823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 
00:27:15.374 [2024-07-16 01:32:41.066898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-07-16 01:32:41.066907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-07-16 01:32:41.066984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-07-16 01:32:41.066993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-07-16 01:32:41.067144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-07-16 01:32:41.067154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-07-16 01:32:41.067238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-07-16 01:32:41.067248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.374 qpair failed and we were unable to recover it. 00:27:15.374 [2024-07-16 01:32:41.067391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.374 [2024-07-16 01:32:41.067401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-07-16 01:32:41.067486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-07-16 01:32:41.067495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-07-16 01:32:41.067706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-07-16 01:32:41.067716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-07-16 01:32:41.067870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-07-16 01:32:41.067879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-07-16 01:32:41.068090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-07-16 01:32:41.068100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-07-16 01:32:41.068270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-07-16 01:32:41.068280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 
00:27:15.375 [2024-07-16 01:32:41.068367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-07-16 01:32:41.068376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-07-16 01:32:41.068574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-07-16 01:32:41.068584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-07-16 01:32:41.068739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-07-16 01:32:41.068777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-07-16 01:32:41.069031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-07-16 01:32:41.069061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-07-16 01:32:41.069249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-07-16 01:32:41.069279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-07-16 01:32:41.069476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-07-16 01:32:41.069486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-07-16 01:32:41.069733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-07-16 01:32:41.069763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-07-16 01:32:41.069954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-07-16 01:32:41.069984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-07-16 01:32:41.070191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-07-16 01:32:41.070221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-07-16 01:32:41.070485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-07-16 01:32:41.070519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 
00:27:15.375 [2024-07-16 01:32:41.070646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-07-16 01:32:41.070675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-07-16 01:32:41.070848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-07-16 01:32:41.070857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-07-16 01:32:41.071108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-07-16 01:32:41.071118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-07-16 01:32:41.071356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-07-16 01:32:41.071425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-07-16 01:32:41.071587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-07-16 01:32:41.071621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-07-16 01:32:41.071913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-07-16 01:32:41.071944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-07-16 01:32:41.072126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-07-16 01:32:41.072156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-07-16 01:32:41.072328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-07-16 01:32:41.072372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-07-16 01:32:41.072597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-07-16 01:32:41.072612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-07-16 01:32:41.072861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-07-16 01:32:41.072876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 
00:27:15.375 [2024-07-16 01:32:41.073154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-07-16 01:32:41.073168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-07-16 01:32:41.073328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-07-16 01:32:41.073348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-07-16 01:32:41.073500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-07-16 01:32:41.073529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-07-16 01:32:41.073799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-07-16 01:32:41.073828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-07-16 01:32:41.074069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-07-16 01:32:41.074099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-07-16 01:32:41.074390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-07-16 01:32:41.074425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-07-16 01:32:41.074706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.375 [2024-07-16 01:32:41.074735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.375 qpair failed and we were unable to recover it. 00:27:15.375 [2024-07-16 01:32:41.074954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-07-16 01:32:41.074964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-07-16 01:32:41.075164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-07-16 01:32:41.075173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-07-16 01:32:41.075351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-07-16 01:32:41.075361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 
00:27:15.376 [2024-07-16 01:32:41.075515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-07-16 01:32:41.075525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-07-16 01:32:41.075706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-07-16 01:32:41.075715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-07-16 01:32:41.075856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-07-16 01:32:41.075865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-07-16 01:32:41.076066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-07-16 01:32:41.076078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-07-16 01:32:41.076233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-07-16 01:32:41.076242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-07-16 01:32:41.076372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-07-16 01:32:41.076382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-07-16 01:32:41.076603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-07-16 01:32:41.076612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-07-16 01:32:41.076755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-07-16 01:32:41.076765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-07-16 01:32:41.076987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-07-16 01:32:41.076997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-07-16 01:32:41.077126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-07-16 01:32:41.077155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 
00:27:15.376 [2024-07-16 01:32:41.077422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-07-16 01:32:41.077452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-07-16 01:32:41.077738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-07-16 01:32:41.077747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-07-16 01:32:41.077945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-07-16 01:32:41.077954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-07-16 01:32:41.078088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-07-16 01:32:41.078098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-07-16 01:32:41.078240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-07-16 01:32:41.078249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-07-16 01:32:41.078473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-07-16 01:32:41.078484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-07-16 01:32:41.078650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-07-16 01:32:41.078659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-07-16 01:32:41.078736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-07-16 01:32:41.078746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-07-16 01:32:41.078887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-07-16 01:32:41.078896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 00:27:15.376 [2024-07-16 01:32:41.079097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.376 [2024-07-16 01:32:41.079106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.376 qpair failed and we were unable to recover it. 
00:27:15.376 [... the posix.c:1023:posix_sock_create "connect() failed, errno = 111" / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock "sock connection error" pair, each followed by "qpair failed and we were unable to recover it.", repeats for every reconnect attempt from 2024-07-16 01:32:41.079 through 01:32:41.121; the failing tqpair handles are 0x7ff0bc000b90, 0x7ff0b4000b90, 0x7ff0c4000b90, and 0x1d2cfc0, all with addr=10.0.0.2, port=4420 ...]
00:27:15.381 [2024-07-16 01:32:41.121577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.381 [2024-07-16 01:32:41.121608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.381 qpair failed and we were unable to recover it. 00:27:15.381 [2024-07-16 01:32:41.121857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.381 [2024-07-16 01:32:41.121867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.381 qpair failed and we were unable to recover it. 00:27:15.381 [2024-07-16 01:32:41.122016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.381 [2024-07-16 01:32:41.122025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.381 qpair failed and we were unable to recover it. 00:27:15.381 [2024-07-16 01:32:41.122169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.381 [2024-07-16 01:32:41.122198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.381 qpair failed and we were unable to recover it. 00:27:15.381 [2024-07-16 01:32:41.122304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.381 [2024-07-16 01:32:41.122334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.381 qpair failed and we were unable to recover it. 00:27:15.381 [2024-07-16 01:32:41.122566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.381 [2024-07-16 01:32:41.122598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.381 qpair failed and we were unable to recover it. 00:27:15.381 [2024-07-16 01:32:41.122906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.381 [2024-07-16 01:32:41.122936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.381 qpair failed and we were unable to recover it. 00:27:15.381 [2024-07-16 01:32:41.123133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.381 [2024-07-16 01:32:41.123162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.381 qpair failed and we were unable to recover it. 00:27:15.381 [2024-07-16 01:32:41.123369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.381 [2024-07-16 01:32:41.123401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.381 qpair failed and we were unable to recover it. 00:27:15.381 [2024-07-16 01:32:41.123665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.381 [2024-07-16 01:32:41.123695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.381 qpair failed and we were unable to recover it. 
00:27:15.381 [2024-07-16 01:32:41.123965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.381 [2024-07-16 01:32:41.124008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.381 qpair failed and we were unable to recover it. 00:27:15.381 [2024-07-16 01:32:41.124097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.381 [2024-07-16 01:32:41.124106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.381 qpair failed and we were unable to recover it. 00:27:15.381 [2024-07-16 01:32:41.124322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.381 [2024-07-16 01:32:41.124332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.381 qpair failed and we were unable to recover it. 00:27:15.381 [2024-07-16 01:32:41.124529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.382 [2024-07-16 01:32:41.124539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.382 qpair failed and we were unable to recover it. 00:27:15.382 [2024-07-16 01:32:41.124670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.382 [2024-07-16 01:32:41.124679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.382 qpair failed and we were unable to recover it. 00:27:15.382 [2024-07-16 01:32:41.124811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.382 [2024-07-16 01:32:41.124820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.382 qpair failed and we were unable to recover it. 00:27:15.382 [2024-07-16 01:32:41.124958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.382 [2024-07-16 01:32:41.124967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.382 qpair failed and we were unable to recover it. 00:27:15.382 [2024-07-16 01:32:41.125110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.382 [2024-07-16 01:32:41.125119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.382 qpair failed and we were unable to recover it. 00:27:15.382 [2024-07-16 01:32:41.125344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.382 [2024-07-16 01:32:41.125353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.382 qpair failed and we were unable to recover it. 00:27:15.382 [2024-07-16 01:32:41.125516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.382 [2024-07-16 01:32:41.125526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.382 qpair failed and we were unable to recover it. 
00:27:15.382 [2024-07-16 01:32:41.125771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.382 [2024-07-16 01:32:41.125801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.382 qpair failed and we were unable to recover it. 00:27:15.382 [2024-07-16 01:32:41.125986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.382 [2024-07-16 01:32:41.126015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.382 qpair failed and we were unable to recover it. 00:27:15.382 [2024-07-16 01:32:41.126222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.382 [2024-07-16 01:32:41.126251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.382 qpair failed and we were unable to recover it. 00:27:15.382 [2024-07-16 01:32:41.126439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.382 [2024-07-16 01:32:41.126474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.382 qpair failed and we were unable to recover it. 00:27:15.382 [2024-07-16 01:32:41.126689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.382 [2024-07-16 01:32:41.126720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.382 qpair failed and we were unable to recover it. 00:27:15.382 [2024-07-16 01:32:41.126844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.382 [2024-07-16 01:32:41.126874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.382 qpair failed and we were unable to recover it. 00:27:15.382 [2024-07-16 01:32:41.127069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.382 [2024-07-16 01:32:41.127079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.382 qpair failed and we were unable to recover it. 00:27:15.382 [2024-07-16 01:32:41.127247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.382 [2024-07-16 01:32:41.127256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.382 qpair failed and we were unable to recover it. 00:27:15.382 [2024-07-16 01:32:41.127391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.382 [2024-07-16 01:32:41.127401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.382 qpair failed and we were unable to recover it. 00:27:15.382 [2024-07-16 01:32:41.127482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.382 [2024-07-16 01:32:41.127491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.382 qpair failed and we were unable to recover it. 
00:27:15.382 [2024-07-16 01:32:41.127654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.382 [2024-07-16 01:32:41.127663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.382 qpair failed and we were unable to recover it. 00:27:15.382 [2024-07-16 01:32:41.127740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.382 [2024-07-16 01:32:41.127749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.382 qpair failed and we were unable to recover it. 00:27:15.382 [2024-07-16 01:32:41.127880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.382 [2024-07-16 01:32:41.127889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.382 qpair failed and we were unable to recover it. 00:27:15.382 [2024-07-16 01:32:41.128131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.382 [2024-07-16 01:32:41.128140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.382 qpair failed and we were unable to recover it. 00:27:15.382 [2024-07-16 01:32:41.128225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.382 [2024-07-16 01:32:41.128234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.382 qpair failed and we were unable to recover it. 00:27:15.382 [2024-07-16 01:32:41.128450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.382 [2024-07-16 01:32:41.128459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.382 qpair failed and we were unable to recover it. 00:27:15.382 [2024-07-16 01:32:41.128605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.382 [2024-07-16 01:32:41.128615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.382 qpair failed and we were unable to recover it. 00:27:15.382 [2024-07-16 01:32:41.128744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.382 [2024-07-16 01:32:41.128754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.382 qpair failed and we were unable to recover it. 00:27:15.382 [2024-07-16 01:32:41.128898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.382 [2024-07-16 01:32:41.128927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.382 qpair failed and we were unable to recover it. 00:27:15.382 [2024-07-16 01:32:41.129166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.382 [2024-07-16 01:32:41.129195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.382 qpair failed and we were unable to recover it. 
00:27:15.382 [2024-07-16 01:32:41.129304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.382 [2024-07-16 01:32:41.129334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.382 qpair failed and we were unable to recover it. 00:27:15.382 [2024-07-16 01:32:41.129604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.382 [2024-07-16 01:32:41.129634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.382 qpair failed and we were unable to recover it. 00:27:15.382 [2024-07-16 01:32:41.129881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.382 [2024-07-16 01:32:41.129910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.382 qpair failed and we were unable to recover it. 00:27:15.382 [2024-07-16 01:32:41.130093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.382 [2024-07-16 01:32:41.130103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.382 qpair failed and we were unable to recover it. 00:27:15.382 [2024-07-16 01:32:41.130273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.382 [2024-07-16 01:32:41.130282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.382 qpair failed and we were unable to recover it. 00:27:15.382 [2024-07-16 01:32:41.130446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.382 [2024-07-16 01:32:41.130458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.382 qpair failed and we were unable to recover it. 00:27:15.382 [2024-07-16 01:32:41.130699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.382 [2024-07-16 01:32:41.130708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.382 qpair failed and we were unable to recover it. 00:27:15.382 [2024-07-16 01:32:41.130874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.382 [2024-07-16 01:32:41.130884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.382 qpair failed and we were unable to recover it. 00:27:15.382 [2024-07-16 01:32:41.131106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.382 [2024-07-16 01:32:41.131115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.382 qpair failed and we were unable to recover it. 00:27:15.382 [2024-07-16 01:32:41.131268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.382 [2024-07-16 01:32:41.131278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.382 qpair failed and we were unable to recover it. 
00:27:15.382 [2024-07-16 01:32:41.131353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.382 [2024-07-16 01:32:41.131364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.382 qpair failed and we were unable to recover it. 00:27:15.382 [2024-07-16 01:32:41.131622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.382 [2024-07-16 01:32:41.131632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.382 qpair failed and we were unable to recover it. 00:27:15.382 [2024-07-16 01:32:41.131859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.382 [2024-07-16 01:32:41.131869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.382 qpair failed and we were unable to recover it. 00:27:15.382 [2024-07-16 01:32:41.132094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.382 [2024-07-16 01:32:41.132123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.382 qpair failed and we were unable to recover it. 00:27:15.383 [2024-07-16 01:32:41.132377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.383 [2024-07-16 01:32:41.132407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.383 qpair failed and we were unable to recover it. 00:27:15.383 [2024-07-16 01:32:41.132543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.383 [2024-07-16 01:32:41.132572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.383 qpair failed and we were unable to recover it. 00:27:15.383 [2024-07-16 01:32:41.132762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.383 [2024-07-16 01:32:41.132791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.383 qpair failed and we were unable to recover it. 00:27:15.383 [2024-07-16 01:32:41.132980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.383 [2024-07-16 01:32:41.133009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.383 qpair failed and we were unable to recover it. 00:27:15.383 [2024-07-16 01:32:41.133195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.383 [2024-07-16 01:32:41.133224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.383 qpair failed and we were unable to recover it. 00:27:15.383 [2024-07-16 01:32:41.133417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.383 [2024-07-16 01:32:41.133447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.383 qpair failed and we were unable to recover it. 
00:27:15.383 [2024-07-16 01:32:41.133715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.383 [2024-07-16 01:32:41.133725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.383 qpair failed and we were unable to recover it. 00:27:15.383 [2024-07-16 01:32:41.133932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.383 [2024-07-16 01:32:41.133941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.383 qpair failed and we were unable to recover it. 00:27:15.383 [2024-07-16 01:32:41.134025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.383 [2024-07-16 01:32:41.134034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.383 qpair failed and we were unable to recover it. 00:27:15.383 [2024-07-16 01:32:41.134230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.383 [2024-07-16 01:32:41.134239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.383 qpair failed and we were unable to recover it. 00:27:15.383 [2024-07-16 01:32:41.134486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.383 [2024-07-16 01:32:41.134498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.383 qpair failed and we were unable to recover it. 00:27:15.383 [2024-07-16 01:32:41.134577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.383 [2024-07-16 01:32:41.134586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.383 qpair failed and we were unable to recover it. 00:27:15.383 [2024-07-16 01:32:41.134667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.383 [2024-07-16 01:32:41.134676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.383 qpair failed and we were unable to recover it. 00:27:15.383 [2024-07-16 01:32:41.134811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.383 [2024-07-16 01:32:41.134820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.383 qpair failed and we were unable to recover it. 00:27:15.383 [2024-07-16 01:32:41.134954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.383 [2024-07-16 01:32:41.134963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.383 qpair failed and we were unable to recover it. 00:27:15.383 [2024-07-16 01:32:41.135181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.383 [2024-07-16 01:32:41.135191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.383 qpair failed and we were unable to recover it. 
00:27:15.383 [2024-07-16 01:32:41.135344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.383 [2024-07-16 01:32:41.135354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.383 qpair failed and we were unable to recover it. 00:27:15.383 [2024-07-16 01:32:41.135488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.383 [2024-07-16 01:32:41.135498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.383 qpair failed and we were unable to recover it. 00:27:15.383 [2024-07-16 01:32:41.135631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.383 [2024-07-16 01:32:41.135640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.383 qpair failed and we were unable to recover it. 00:27:15.383 [2024-07-16 01:32:41.135838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.383 [2024-07-16 01:32:41.135847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.383 qpair failed and we were unable to recover it. 00:27:15.383 [2024-07-16 01:32:41.136003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.383 [2024-07-16 01:32:41.136012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.383 qpair failed and we were unable to recover it. 00:27:15.383 [2024-07-16 01:32:41.136237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.383 [2024-07-16 01:32:41.136247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.383 qpair failed and we were unable to recover it. 00:27:15.383 [2024-07-16 01:32:41.136487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.383 [2024-07-16 01:32:41.136497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.383 qpair failed and we were unable to recover it. 00:27:15.383 [2024-07-16 01:32:41.136716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.383 [2024-07-16 01:32:41.136726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.383 qpair failed and we were unable to recover it. 00:27:15.383 [2024-07-16 01:32:41.136950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.383 [2024-07-16 01:32:41.136959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.383 qpair failed and we were unable to recover it. 00:27:15.383 [2024-07-16 01:32:41.137091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.383 [2024-07-16 01:32:41.137101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.383 qpair failed and we were unable to recover it. 
00:27:15.383 [2024-07-16 01:32:41.137230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.383 [2024-07-16 01:32:41.137240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.383 qpair failed and we were unable to recover it. 00:27:15.383 [2024-07-16 01:32:41.137376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.383 [2024-07-16 01:32:41.137386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.383 qpair failed and we were unable to recover it. 00:27:15.383 [2024-07-16 01:32:41.137632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.383 [2024-07-16 01:32:41.137642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.383 qpair failed and we were unable to recover it. 00:27:15.383 [2024-07-16 01:32:41.137780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.383 [2024-07-16 01:32:41.137789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.383 qpair failed and we were unable to recover it. 00:27:15.383 [2024-07-16 01:32:41.138028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.383 [2024-07-16 01:32:41.138037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.383 qpair failed and we were unable to recover it. 00:27:15.383 [2024-07-16 01:32:41.138215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.383 [2024-07-16 01:32:41.138225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.383 qpair failed and we were unable to recover it. 00:27:15.383 [2024-07-16 01:32:41.138445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.383 [2024-07-16 01:32:41.138457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.383 qpair failed and we were unable to recover it. 00:27:15.383 [2024-07-16 01:32:41.138630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.383 [2024-07-16 01:32:41.138640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.383 qpair failed and we were unable to recover it. 00:27:15.383 [2024-07-16 01:32:41.138789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.383 [2024-07-16 01:32:41.138798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.383 qpair failed and we were unable to recover it. 00:27:15.383 [2024-07-16 01:32:41.138982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.383 [2024-07-16 01:32:41.138992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.383 qpair failed and we were unable to recover it. 
00:27:15.383 [2024-07-16 01:32:41.139138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.383 [2024-07-16 01:32:41.139150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.383 qpair failed and we were unable to recover it. 00:27:15.383 [2024-07-16 01:32:41.139291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.383 [2024-07-16 01:32:41.139301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.383 qpair failed and we were unable to recover it. 00:27:15.383 [2024-07-16 01:32:41.139515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.383 [2024-07-16 01:32:41.139525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.383 qpair failed and we were unable to recover it. 00:27:15.383 [2024-07-16 01:32:41.139617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.383 [2024-07-16 01:32:41.139626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.383 qpair failed and we were unable to recover it. 00:27:15.383 [2024-07-16 01:32:41.139828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.383 [2024-07-16 01:32:41.139838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.383 qpair failed and we were unable to recover it. 00:27:15.384 [2024-07-16 01:32:41.140059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.384 [2024-07-16 01:32:41.140068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.384 qpair failed and we were unable to recover it. 00:27:15.384 [2024-07-16 01:32:41.140295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.384 [2024-07-16 01:32:41.140305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.384 qpair failed and we were unable to recover it. 00:27:15.384 [2024-07-16 01:32:41.140500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.384 [2024-07-16 01:32:41.140510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.384 qpair failed and we were unable to recover it. 00:27:15.384 [2024-07-16 01:32:41.140659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.384 [2024-07-16 01:32:41.140669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.384 qpair failed and we were unable to recover it. 00:27:15.384 [2024-07-16 01:32:41.140830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.384 [2024-07-16 01:32:41.140840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.384 qpair failed and we were unable to recover it. 
00:27:15.384 [2024-07-16 01:32:41.140940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.384 [2024-07-16 01:32:41.140949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.384 qpair failed and we were unable to recover it. 00:27:15.384 [2024-07-16 01:32:41.141112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.384 [2024-07-16 01:32:41.141122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.384 qpair failed and we were unable to recover it. 00:27:15.384 [2024-07-16 01:32:41.141251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.384 [2024-07-16 01:32:41.141260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.384 qpair failed and we were unable to recover it. 00:27:15.384 [2024-07-16 01:32:41.141435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.384 [2024-07-16 01:32:41.141445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.384 qpair failed and we were unable to recover it. 00:27:15.384 [2024-07-16 01:32:41.141606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.384 [2024-07-16 01:32:41.141615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.384 qpair failed and we were unable to recover it. 00:27:15.384 [2024-07-16 01:32:41.141767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.384 [2024-07-16 01:32:41.141776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.384 qpair failed and we were unable to recover it. 00:27:15.384 [2024-07-16 01:32:41.141938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.384 [2024-07-16 01:32:41.141948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.384 qpair failed and we were unable to recover it. 00:27:15.384 [2024-07-16 01:32:41.142146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.384 [2024-07-16 01:32:41.142155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.384 qpair failed and we were unable to recover it. 00:27:15.384 [2024-07-16 01:32:41.142331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.384 [2024-07-16 01:32:41.142385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.384 qpair failed and we were unable to recover it. 00:27:15.384 [2024-07-16 01:32:41.142651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.384 [2024-07-16 01:32:41.142681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.384 qpair failed and we were unable to recover it. 
00:27:15.384 [2024-07-16 01:32:41.142947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.384 [2024-07-16 01:32:41.142976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.384 qpair failed and we were unable to recover it. 00:27:15.384 [2024-07-16 01:32:41.143273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.384 [2024-07-16 01:32:41.143302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.384 qpair failed and we were unable to recover it. 00:27:15.384 [2024-07-16 01:32:41.143574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.384 [2024-07-16 01:32:41.143604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.384 qpair failed and we were unable to recover it. 00:27:15.384 [2024-07-16 01:32:41.143846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.384 [2024-07-16 01:32:41.143875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.384 qpair failed and we were unable to recover it. 00:27:15.384 [2024-07-16 01:32:41.144115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.384 [2024-07-16 01:32:41.144144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.384 qpair failed and we were unable to recover it. 00:27:15.384 [2024-07-16 01:32:41.144417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.384 [2024-07-16 01:32:41.144448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.384 qpair failed and we were unable to recover it. 00:27:15.384 [2024-07-16 01:32:41.144579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.384 [2024-07-16 01:32:41.144604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.384 qpair failed and we were unable to recover it. 00:27:15.384 [2024-07-16 01:32:41.144773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.384 [2024-07-16 01:32:41.144783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.384 qpair failed and we were unable to recover it. 00:27:15.384 [2024-07-16 01:32:41.145035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.384 [2024-07-16 01:32:41.145065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.384 qpair failed and we were unable to recover it. 00:27:15.384 [2024-07-16 01:32:41.145318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.384 [2024-07-16 01:32:41.145356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.384 qpair failed and we were unable to recover it. 
00:27:15.384 [2024-07-16 01:32:41.145558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.384 [2024-07-16 01:32:41.145588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.384 qpair failed and we were unable to recover it. 00:27:15.384 [2024-07-16 01:32:41.145879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.384 [2024-07-16 01:32:41.145908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.384 qpair failed and we were unable to recover it. 00:27:15.384 [2024-07-16 01:32:41.146033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.384 [2024-07-16 01:32:41.146062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.384 qpair failed and we were unable to recover it. 00:27:15.384 [2024-07-16 01:32:41.146267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.384 [2024-07-16 01:32:41.146296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.384 qpair failed and we were unable to recover it. 00:27:15.384 [2024-07-16 01:32:41.146581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.384 [2024-07-16 01:32:41.146614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.384 qpair failed and we were unable to recover it. 00:27:15.384 [2024-07-16 01:32:41.146807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.384 [2024-07-16 01:32:41.146817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.384 qpair failed and we were unable to recover it. 00:27:15.384 [2024-07-16 01:32:41.147017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.384 [2024-07-16 01:32:41.147027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.384 qpair failed and we were unable to recover it. 00:27:15.384 [2024-07-16 01:32:41.147224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.384 [2024-07-16 01:32:41.147234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.384 qpair failed and we were unable to recover it. 00:27:15.384 [2024-07-16 01:32:41.147418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.384 [2024-07-16 01:32:41.147429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.384 qpair failed and we were unable to recover it. 00:27:15.384 [2024-07-16 01:32:41.147602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.384 [2024-07-16 01:32:41.147633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.384 qpair failed and we were unable to recover it. 
00:27:15.384 [2024-07-16 01:32:41.147701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.385 [2024-07-16 01:32:41.147712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.385 qpair failed and we were unable to recover it.
[... the identical three-line failure (connect() errno = 111, sock connection error, "qpair failed and we were unable to recover it.") repeats for tqpair=0x7ff0bc000b90 at sub-millisecond intervals from 01:32:41.147804 through 01:32:41.187786 ...]
00:27:15.389 [2024-07-16 01:32:41.187946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.389 [2024-07-16 01:32:41.187977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.389 qpair failed and we were unable to recover it.
00:27:15.389 [2024-07-16 01:32:41.188264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.389 [2024-07-16 01:32:41.188334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420
00:27:15.389 qpair failed and we were unable to recover it.
[... the same failure then repeats for the new tqpair=0x7ff0c4000b90 from 01:32:41.188517 through 01:32:41.193185 ...]
00:27:15.390 [2024-07-16 01:32:41.193469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.390 [2024-07-16 01:32:41.193502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420
00:27:15.390 qpair failed and we were unable to recover it.
00:27:15.390 [2024-07-16 01:32:41.193644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-07-16 01:32:41.193674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-07-16 01:32:41.193986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-07-16 01:32:41.194016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-07-16 01:32:41.194201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-07-16 01:32:41.194233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-07-16 01:32:41.194445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-07-16 01:32:41.194476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-07-16 01:32:41.194629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-07-16 01:32:41.194660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-07-16 01:32:41.194845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-07-16 01:32:41.194860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-07-16 01:32:41.195072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-07-16 01:32:41.195088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-07-16 01:32:41.195239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-07-16 01:32:41.195270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-07-16 01:32:41.195407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-07-16 01:32:41.195438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-07-16 01:32:41.195707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-07-16 01:32:41.195745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 
00:27:15.390 [2024-07-16 01:32:41.195890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-07-16 01:32:41.195906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-07-16 01:32:41.196077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-07-16 01:32:41.196107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-07-16 01:32:41.196369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-07-16 01:32:41.196400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-07-16 01:32:41.196696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-07-16 01:32:41.196727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-07-16 01:32:41.197016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-07-16 01:32:41.197047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-07-16 01:32:41.197314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-07-16 01:32:41.197329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-07-16 01:32:41.197493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-07-16 01:32:41.197510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-07-16 01:32:41.197681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-07-16 01:32:41.197698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-07-16 01:32:41.197940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-07-16 01:32:41.197970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-07-16 01:32:41.198241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-07-16 01:32:41.198271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 
00:27:15.390 [2024-07-16 01:32:41.198539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-07-16 01:32:41.198571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-07-16 01:32:41.198695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-07-16 01:32:41.198725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-07-16 01:32:41.198989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-07-16 01:32:41.199020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-07-16 01:32:41.199215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-07-16 01:32:41.199231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-07-16 01:32:41.199407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-07-16 01:32:41.199424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-07-16 01:32:41.199644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-07-16 01:32:41.199675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-07-16 01:32:41.199804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-07-16 01:32:41.199835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.390 [2024-07-16 01:32:41.200015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.390 [2024-07-16 01:32:41.200046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.390 qpair failed and we were unable to recover it. 00:27:15.391 [2024-07-16 01:32:41.200248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-07-16 01:32:41.200265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-07-16 01:32:41.200418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-07-16 01:32:41.200450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 
00:27:15.391 [2024-07-16 01:32:41.200629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-07-16 01:32:41.200666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-07-16 01:32:41.200909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-07-16 01:32:41.200939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-07-16 01:32:41.201229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-07-16 01:32:41.201260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-07-16 01:32:41.201538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-07-16 01:32:41.201569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-07-16 01:32:41.201810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-07-16 01:32:41.201842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-07-16 01:32:41.201966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-07-16 01:32:41.201996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-07-16 01:32:41.202194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-07-16 01:32:41.202225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-07-16 01:32:41.202509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-07-16 01:32:41.202541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-07-16 01:32:41.202791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-07-16 01:32:41.202822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-07-16 01:32:41.203117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-07-16 01:32:41.203133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 
00:27:15.391 [2024-07-16 01:32:41.203368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-07-16 01:32:41.203384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-07-16 01:32:41.203612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-07-16 01:32:41.203628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-07-16 01:32:41.203868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-07-16 01:32:41.203884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-07-16 01:32:41.204122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-07-16 01:32:41.204139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-07-16 01:32:41.204349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-07-16 01:32:41.204366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-07-16 01:32:41.204505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-07-16 01:32:41.204521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-07-16 01:32:41.204608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-07-16 01:32:41.204622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-07-16 01:32:41.204835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-07-16 01:32:41.204850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-07-16 01:32:41.204946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-07-16 01:32:41.204961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-07-16 01:32:41.205179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-07-16 01:32:41.205210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 
00:27:15.391 [2024-07-16 01:32:41.205408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-07-16 01:32:41.205440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-07-16 01:32:41.205711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-07-16 01:32:41.205741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-07-16 01:32:41.206029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-07-16 01:32:41.206059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-07-16 01:32:41.206283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-07-16 01:32:41.206313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-07-16 01:32:41.206612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-07-16 01:32:41.206642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-07-16 01:32:41.206832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-07-16 01:32:41.206863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-07-16 01:32:41.206999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-07-16 01:32:41.207029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-07-16 01:32:41.207294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-07-16 01:32:41.207327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-07-16 01:32:41.207585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-07-16 01:32:41.207615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-07-16 01:32:41.207896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-07-16 01:32:41.207926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 
00:27:15.391 [2024-07-16 01:32:41.208200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-07-16 01:32:41.208216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-07-16 01:32:41.208369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-07-16 01:32:41.208385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-07-16 01:32:41.208526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-07-16 01:32:41.208542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-07-16 01:32:41.208692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-07-16 01:32:41.208708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-07-16 01:32:41.208940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-07-16 01:32:41.208956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-07-16 01:32:41.209203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-07-16 01:32:41.209219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-07-16 01:32:41.209455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-07-16 01:32:41.209488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-07-16 01:32:41.209761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-07-16 01:32:41.209801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.391 qpair failed and we were unable to recover it. 00:27:15.391 [2024-07-16 01:32:41.209979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.391 [2024-07-16 01:32:41.209995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-07-16 01:32:41.210258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-07-16 01:32:41.210289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 
00:27:15.392 [2024-07-16 01:32:41.210445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-07-16 01:32:41.210483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-07-16 01:32:41.210726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-07-16 01:32:41.210757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-07-16 01:32:41.210948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-07-16 01:32:41.210978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-07-16 01:32:41.211194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-07-16 01:32:41.211225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-07-16 01:32:41.211435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-07-16 01:32:41.211452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-07-16 01:32:41.211667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-07-16 01:32:41.211699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-07-16 01:32:41.211908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-07-16 01:32:41.211938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-07-16 01:32:41.212178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-07-16 01:32:41.212209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-07-16 01:32:41.212463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-07-16 01:32:41.212494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-07-16 01:32:41.212687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-07-16 01:32:41.212718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 
00:27:15.392 [2024-07-16 01:32:41.212908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-07-16 01:32:41.212924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-07-16 01:32:41.213078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-07-16 01:32:41.213108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-07-16 01:32:41.213294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-07-16 01:32:41.213324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-07-16 01:32:41.213543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-07-16 01:32:41.213573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-07-16 01:32:41.213719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-07-16 01:32:41.213750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-07-16 01:32:41.214017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-07-16 01:32:41.214048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-07-16 01:32:41.214236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-07-16 01:32:41.214266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-07-16 01:32:41.214536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-07-16 01:32:41.214567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-07-16 01:32:41.214827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-07-16 01:32:41.214858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-07-16 01:32:41.215107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-07-16 01:32:41.215138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 
00:27:15.392 [2024-07-16 01:32:41.215432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-07-16 01:32:41.215463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-07-16 01:32:41.215738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-07-16 01:32:41.215769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-07-16 01:32:41.215949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-07-16 01:32:41.215979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-07-16 01:32:41.216195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-07-16 01:32:41.216225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-07-16 01:32:41.216487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-07-16 01:32:41.216518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-07-16 01:32:41.216756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-07-16 01:32:41.216786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-07-16 01:32:41.216978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-07-16 01:32:41.217008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-07-16 01:32:41.217262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-07-16 01:32:41.217332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-07-16 01:32:41.217628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-07-16 01:32:41.217698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-07-16 01:32:41.218180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-07-16 01:32:41.218250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 
00:27:15.392 [2024-07-16 01:32:41.218426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-07-16 01:32:41.218447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-07-16 01:32:41.218660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-07-16 01:32:41.218678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-07-16 01:32:41.218841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-07-16 01:32:41.218857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-07-16 01:32:41.219001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.392 [2024-07-16 01:32:41.219016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.392 qpair failed and we were unable to recover it. 00:27:15.392 [2024-07-16 01:32:41.219280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-07-16 01:32:41.219320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-07-16 01:32:41.219525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-07-16 01:32:41.219558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-07-16 01:32:41.219832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-07-16 01:32:41.219863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-07-16 01:32:41.220154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-07-16 01:32:41.220184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-07-16 01:32:41.220448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-07-16 01:32:41.220459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-07-16 01:32:41.220560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-07-16 01:32:41.220570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 
00:27:15.393 [2024-07-16 01:32:41.220783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-07-16 01:32:41.220814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-07-16 01:32:41.221043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-07-16 01:32:41.221075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-07-16 01:32:41.221277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-07-16 01:32:41.221307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-07-16 01:32:41.221513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-07-16 01:32:41.221548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-07-16 01:32:41.221825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-07-16 01:32:41.221864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-07-16 01:32:41.222035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-07-16 01:32:41.222052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-07-16 01:32:41.222264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-07-16 01:32:41.222297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-07-16 01:32:41.222597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-07-16 01:32:41.222628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-07-16 01:32:41.222920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-07-16 01:32:41.222952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-07-16 01:32:41.223222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-07-16 01:32:41.223252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 
00:27:15.393 [2024-07-16 01:32:41.223476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-07-16 01:32:41.223508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-07-16 01:32:41.223771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-07-16 01:32:41.223802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-07-16 01:32:41.223992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-07-16 01:32:41.224024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-07-16 01:32:41.224306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-07-16 01:32:41.224344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-07-16 01:32:41.224594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-07-16 01:32:41.224630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-07-16 01:32:41.224842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-07-16 01:32:41.224874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-07-16 01:32:41.225103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-07-16 01:32:41.225119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-07-16 01:32:41.225266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-07-16 01:32:41.225283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-07-16 01:32:41.225456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-07-16 01:32:41.225495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 00:27:15.393 [2024-07-16 01:32:41.225743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.393 [2024-07-16 01:32:41.225775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.393 qpair failed and we were unable to recover it. 
00:27:15.393 [2024-07-16 01:32:41.225953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.393 [2024-07-16 01:32:41.225984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.393 qpair failed and we were unable to recover it.
00:27:15.393 [... the same connect() failed / sock connection error / qpair failed sequence repeats for tqpair=0x7ff0bc000b90 through 2024-07-16 01:32:41.234620 ...]
00:27:15.394 [2024-07-16 01:32:41.234956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.394 [2024-07-16 01:32:41.235026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420
00:27:15.394 qpair failed and we were unable to recover it.
00:27:15.394 [... the same three-line sequence repeats for tqpair=0x7ff0c4000b90 through 2024-07-16 01:32:41.270248 ...]
00:27:15.398 [2024-07-16 01:32:41.270509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.398 [2024-07-16 01:32:41.270536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.398 qpair failed and we were unable to recover it.
00:27:15.681 [... the same three-line sequence repeats for tqpair=0x7ff0bc000b90 through 2024-07-16 01:32:41.276043 ...]
00:27:15.681 [2024-07-16 01:32:41.276196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.681 [2024-07-16 01:32:41.276208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.681 qpair failed and we were unable to recover it. 00:27:15.681 [2024-07-16 01:32:41.276364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.681 [2024-07-16 01:32:41.276377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.681 qpair failed and we were unable to recover it. 00:27:15.681 [2024-07-16 01:32:41.276626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.681 [2024-07-16 01:32:41.276637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.681 qpair failed and we were unable to recover it. 00:27:15.681 [2024-07-16 01:32:41.276862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.681 [2024-07-16 01:32:41.276874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.681 qpair failed and we were unable to recover it. 00:27:15.681 [2024-07-16 01:32:41.277119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.681 [2024-07-16 01:32:41.277130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.681 qpair failed and we were unable to recover it. 00:27:15.681 [2024-07-16 01:32:41.277285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.681 [2024-07-16 01:32:41.277297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.681 qpair failed and we were unable to recover it. 00:27:15.681 [2024-07-16 01:32:41.277449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.681 [2024-07-16 01:32:41.277461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.681 qpair failed and we were unable to recover it. 00:27:15.681 [2024-07-16 01:32:41.277716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.681 [2024-07-16 01:32:41.277728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.681 qpair failed and we were unable to recover it. 00:27:15.681 [2024-07-16 01:32:41.277911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.681 [2024-07-16 01:32:41.277922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.681 qpair failed and we were unable to recover it. 00:27:15.681 [2024-07-16 01:32:41.278130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.681 [2024-07-16 01:32:41.278142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.681 qpair failed and we were unable to recover it. 
00:27:15.681 [2024-07-16 01:32:41.278219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.681 [2024-07-16 01:32:41.278229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.681 qpair failed and we were unable to recover it. 00:27:15.681 [2024-07-16 01:32:41.278433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.681 [2024-07-16 01:32:41.278445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.681 qpair failed and we were unable to recover it. 00:27:15.681 [2024-07-16 01:32:41.278535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.681 [2024-07-16 01:32:41.278547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.681 qpair failed and we were unable to recover it. 00:27:15.681 [2024-07-16 01:32:41.278687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.681 [2024-07-16 01:32:41.278699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.681 qpair failed and we were unable to recover it. 00:27:15.681 [2024-07-16 01:32:41.278899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.681 [2024-07-16 01:32:41.278911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.681 qpair failed and we were unable to recover it. 00:27:15.681 [2024-07-16 01:32:41.279082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.681 [2024-07-16 01:32:41.279095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.681 qpair failed and we were unable to recover it. 00:27:15.681 [2024-07-16 01:32:41.279230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.681 [2024-07-16 01:32:41.279242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.681 qpair failed and we were unable to recover it. 00:27:15.681 [2024-07-16 01:32:41.279347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.681 [2024-07-16 01:32:41.279359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.681 qpair failed and we were unable to recover it. 00:27:15.681 [2024-07-16 01:32:41.279511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.681 [2024-07-16 01:32:41.279524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.681 qpair failed and we were unable to recover it. 00:27:15.681 [2024-07-16 01:32:41.279675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.681 [2024-07-16 01:32:41.279688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.681 qpair failed and we were unable to recover it. 
00:27:15.681 [2024-07-16 01:32:41.279761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.681 [2024-07-16 01:32:41.279773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.681 qpair failed and we were unable to recover it. 00:27:15.681 [2024-07-16 01:32:41.280000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.681 [2024-07-16 01:32:41.280012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.681 qpair failed and we were unable to recover it. 00:27:15.681 [2024-07-16 01:32:41.280163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.681 [2024-07-16 01:32:41.280175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.681 qpair failed and we were unable to recover it. 00:27:15.681 [2024-07-16 01:32:41.280281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.681 [2024-07-16 01:32:41.280293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.681 qpair failed and we were unable to recover it. 00:27:15.681 [2024-07-16 01:32:41.280379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.681 [2024-07-16 01:32:41.280394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.681 qpair failed and we were unable to recover it. 00:27:15.681 [2024-07-16 01:32:41.280539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.681 [2024-07-16 01:32:41.280552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.681 qpair failed and we were unable to recover it. 00:27:15.681 [2024-07-16 01:32:41.280722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.681 [2024-07-16 01:32:41.280751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.681 qpair failed and we were unable to recover it. 00:27:15.681 [2024-07-16 01:32:41.280898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.681 [2024-07-16 01:32:41.280911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.681 qpair failed and we were unable to recover it. 00:27:15.681 [2024-07-16 01:32:41.281096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.681 [2024-07-16 01:32:41.281124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.681 qpair failed and we were unable to recover it. 00:27:15.681 [2024-07-16 01:32:41.281355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.681 [2024-07-16 01:32:41.281369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.681 qpair failed and we were unable to recover it. 
00:27:15.681 [2024-07-16 01:32:41.281459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.681 [2024-07-16 01:32:41.281472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.681 qpair failed and we were unable to recover it. 00:27:15.682 [2024-07-16 01:32:41.281620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.682 [2024-07-16 01:32:41.281632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.682 qpair failed and we were unable to recover it. 00:27:15.682 [2024-07-16 01:32:41.281730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.682 [2024-07-16 01:32:41.281741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.682 qpair failed and we were unable to recover it. 00:27:15.682 [2024-07-16 01:32:41.281823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.682 [2024-07-16 01:32:41.281834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.682 qpair failed and we were unable to recover it. 00:27:15.682 [2024-07-16 01:32:41.282038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.682 [2024-07-16 01:32:41.282050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.682 qpair failed and we were unable to recover it. 00:27:15.682 [2024-07-16 01:32:41.282115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.682 [2024-07-16 01:32:41.282125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.682 qpair failed and we were unable to recover it. 00:27:15.682 [2024-07-16 01:32:41.282291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.682 [2024-07-16 01:32:41.282303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.682 qpair failed and we were unable to recover it. 00:27:15.682 [2024-07-16 01:32:41.282394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.682 [2024-07-16 01:32:41.282405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.682 qpair failed and we were unable to recover it. 00:27:15.682 [2024-07-16 01:32:41.282621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.682 [2024-07-16 01:32:41.282633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.682 qpair failed and we were unable to recover it. 00:27:15.682 [2024-07-16 01:32:41.282782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.682 [2024-07-16 01:32:41.282794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.682 qpair failed and we were unable to recover it. 
00:27:15.682 [2024-07-16 01:32:41.283011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.682 [2024-07-16 01:32:41.283023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.682 qpair failed and we were unable to recover it. 00:27:15.682 [2024-07-16 01:32:41.283122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.682 [2024-07-16 01:32:41.283133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.682 qpair failed and we were unable to recover it. 00:27:15.682 [2024-07-16 01:32:41.283210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.682 [2024-07-16 01:32:41.283221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.682 qpair failed and we were unable to recover it. 00:27:15.682 [2024-07-16 01:32:41.283443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.682 [2024-07-16 01:32:41.283455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.682 qpair failed and we were unable to recover it. 00:27:15.682 [2024-07-16 01:32:41.283659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.682 [2024-07-16 01:32:41.283671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.682 qpair failed and we were unable to recover it. 00:27:15.682 [2024-07-16 01:32:41.283836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.682 [2024-07-16 01:32:41.283848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.682 qpair failed and we were unable to recover it. 00:27:15.682 [2024-07-16 01:32:41.284057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.682 [2024-07-16 01:32:41.284068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.682 qpair failed and we were unable to recover it. 00:27:15.682 [2024-07-16 01:32:41.284229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.682 [2024-07-16 01:32:41.284240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.682 qpair failed and we were unable to recover it. 00:27:15.682 [2024-07-16 01:32:41.284388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.682 [2024-07-16 01:32:41.284401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.682 qpair failed and we were unable to recover it. 00:27:15.682 [2024-07-16 01:32:41.284654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.682 [2024-07-16 01:32:41.284665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.682 qpair failed and we were unable to recover it. 
00:27:15.682 [2024-07-16 01:32:41.284832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.682 [2024-07-16 01:32:41.284843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.682 qpair failed and we were unable to recover it. 00:27:15.682 [2024-07-16 01:32:41.285019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.682 [2024-07-16 01:32:41.285030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.682 qpair failed and we were unable to recover it. 00:27:15.682 [2024-07-16 01:32:41.285279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.682 [2024-07-16 01:32:41.285291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.682 qpair failed and we were unable to recover it. 00:27:15.682 [2024-07-16 01:32:41.285373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.682 [2024-07-16 01:32:41.285383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.682 qpair failed and we were unable to recover it. 00:27:15.682 [2024-07-16 01:32:41.285522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.682 [2024-07-16 01:32:41.285534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.682 qpair failed and we were unable to recover it. 00:27:15.682 [2024-07-16 01:32:41.285678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.682 [2024-07-16 01:32:41.285689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.682 qpair failed and we were unable to recover it. 00:27:15.682 [2024-07-16 01:32:41.285855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.682 [2024-07-16 01:32:41.285866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.682 qpair failed and we were unable to recover it. 00:27:15.682 [2024-07-16 01:32:41.286037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.682 [2024-07-16 01:32:41.286049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.682 qpair failed and we were unable to recover it. 00:27:15.682 [2024-07-16 01:32:41.286278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.682 [2024-07-16 01:32:41.286290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.682 qpair failed and we were unable to recover it. 00:27:15.682 [2024-07-16 01:32:41.289487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.682 [2024-07-16 01:32:41.289503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.682 qpair failed and we were unable to recover it. 
00:27:15.682 [2024-07-16 01:32:41.289650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.682 [2024-07-16 01:32:41.289661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.682 qpair failed and we were unable to recover it. 00:27:15.682 [2024-07-16 01:32:41.289813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.682 [2024-07-16 01:32:41.289824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.682 qpair failed and we were unable to recover it. 00:27:15.682 [2024-07-16 01:32:41.290077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.682 [2024-07-16 01:32:41.290089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.682 qpair failed and we were unable to recover it. 00:27:15.682 [2024-07-16 01:32:41.290316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.682 [2024-07-16 01:32:41.290327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.682 qpair failed and we were unable to recover it. 00:27:15.682 [2024-07-16 01:32:41.290499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.682 [2024-07-16 01:32:41.290537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.682 qpair failed and we were unable to recover it. 00:27:15.682 [2024-07-16 01:32:41.290708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.682 [2024-07-16 01:32:41.290727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.682 qpair failed and we were unable to recover it. 00:27:15.682 [2024-07-16 01:32:41.290885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.682 [2024-07-16 01:32:41.290902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.682 qpair failed and we were unable to recover it. 00:27:15.682 [2024-07-16 01:32:41.291047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.682 [2024-07-16 01:32:41.291063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.682 qpair failed and we were unable to recover it. 00:27:15.682 [2024-07-16 01:32:41.291217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.682 [2024-07-16 01:32:41.291235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.682 qpair failed and we were unable to recover it. 00:27:15.682 [2024-07-16 01:32:41.291482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.682 [2024-07-16 01:32:41.291500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.682 qpair failed and we were unable to recover it. 
00:27:15.682 [2024-07-16 01:32:41.291668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.682 [2024-07-16 01:32:41.291682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.682 qpair failed and we were unable to recover it. 00:27:15.682 [2024-07-16 01:32:41.291778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.683 [2024-07-16 01:32:41.291789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.683 qpair failed and we were unable to recover it. 00:27:15.683 [2024-07-16 01:32:41.291933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.683 [2024-07-16 01:32:41.291945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.683 qpair failed and we were unable to recover it. 00:27:15.683 [2024-07-16 01:32:41.292114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.683 [2024-07-16 01:32:41.292125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.683 qpair failed and we were unable to recover it. 00:27:15.683 [2024-07-16 01:32:41.292271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.683 [2024-07-16 01:32:41.292282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.683 qpair failed and we were unable to recover it. 00:27:15.683 [2024-07-16 01:32:41.292521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.683 [2024-07-16 01:32:41.292533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.683 qpair failed and we were unable to recover it. 00:27:15.683 [2024-07-16 01:32:41.292745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.683 [2024-07-16 01:32:41.292756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.683 qpair failed and we were unable to recover it. 00:27:15.683 [2024-07-16 01:32:41.292925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.683 [2024-07-16 01:32:41.292936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.683 qpair failed and we were unable to recover it. 00:27:15.683 [2024-07-16 01:32:41.293031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.683 [2024-07-16 01:32:41.293041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.683 qpair failed and we were unable to recover it. 00:27:15.683 [2024-07-16 01:32:41.293209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.683 [2024-07-16 01:32:41.293220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.683 qpair failed and we were unable to recover it. 
00:27:15.683 [2024-07-16 01:32:41.293440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.683 [2024-07-16 01:32:41.293452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.683 qpair failed and we were unable to recover it. 00:27:15.683 [2024-07-16 01:32:41.293603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.683 [2024-07-16 01:32:41.293615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.683 qpair failed and we were unable to recover it. 00:27:15.683 [2024-07-16 01:32:41.293682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.683 [2024-07-16 01:32:41.293692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.683 qpair failed and we were unable to recover it. 00:27:15.683 [2024-07-16 01:32:41.293779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.683 [2024-07-16 01:32:41.293790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.683 qpair failed and we were unable to recover it. 00:27:15.683 [2024-07-16 01:32:41.294016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.683 [2024-07-16 01:32:41.294028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.683 qpair failed and we were unable to recover it. 00:27:15.683 [2024-07-16 01:32:41.294236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.683 [2024-07-16 01:32:41.294248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.683 qpair failed and we were unable to recover it. 00:27:15.683 [2024-07-16 01:32:41.294426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.683 [2024-07-16 01:32:41.294438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.683 qpair failed and we were unable to recover it. 00:27:15.683 [2024-07-16 01:32:41.294707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.683 [2024-07-16 01:32:41.294719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.683 qpair failed and we were unable to recover it. 00:27:15.683 [2024-07-16 01:32:41.294948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.683 [2024-07-16 01:32:41.294960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.683 qpair failed and we were unable to recover it. 00:27:15.683 [2024-07-16 01:32:41.295206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.683 [2024-07-16 01:32:41.295218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.683 qpair failed and we were unable to recover it. 
00:27:15.683 [2024-07-16 01:32:41.295360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.683 [2024-07-16 01:32:41.295372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.683 qpair failed and we were unable to recover it. 00:27:15.683 [2024-07-16 01:32:41.295517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.683 [2024-07-16 01:32:41.295529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.683 qpair failed and we were unable to recover it. 00:27:15.683 [2024-07-16 01:32:41.295754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.683 [2024-07-16 01:32:41.295766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.683 qpair failed and we were unable to recover it. 00:27:15.683 [2024-07-16 01:32:41.295925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.683 [2024-07-16 01:32:41.295937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.683 qpair failed and we were unable to recover it. 00:27:15.683 [2024-07-16 01:32:41.296026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.683 [2024-07-16 01:32:41.296036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.683 qpair failed and we were unable to recover it. 00:27:15.683 [2024-07-16 01:32:41.296248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.683 [2024-07-16 01:32:41.296259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.683 qpair failed and we were unable to recover it. 00:27:15.683 [2024-07-16 01:32:41.296469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.683 [2024-07-16 01:32:41.296482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.683 qpair failed and we were unable to recover it. 00:27:15.683 [2024-07-16 01:32:41.296566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.683 [2024-07-16 01:32:41.296577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.683 qpair failed and we were unable to recover it. 00:27:15.683 [2024-07-16 01:32:41.296670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.683 [2024-07-16 01:32:41.296681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.683 qpair failed and we were unable to recover it. 00:27:15.683 [2024-07-16 01:32:41.296778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.683 [2024-07-16 01:32:41.296789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.683 qpair failed and we were unable to recover it. 
00:27:15.683 [2024-07-16 01:32:41.296870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.683 [2024-07-16 01:32:41.296881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.683 qpair failed and we were unable to recover it. 00:27:15.683 [2024-07-16 01:32:41.296962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.683 [2024-07-16 01:32:41.296973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.683 qpair failed and we were unable to recover it. 00:27:15.683 [2024-07-16 01:32:41.297125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.683 [2024-07-16 01:32:41.297136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.683 qpair failed and we were unable to recover it. 00:27:15.683 [2024-07-16 01:32:41.297293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.683 [2024-07-16 01:32:41.297304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.683 qpair failed and we were unable to recover it. 00:27:15.683 [2024-07-16 01:32:41.297462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.683 [2024-07-16 01:32:41.297477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.683 qpair failed and we were unable to recover it. 00:27:15.683 [2024-07-16 01:32:41.297624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.683 [2024-07-16 01:32:41.297635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.683 qpair failed and we were unable to recover it. 00:27:15.683 [2024-07-16 01:32:41.297799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.683 [2024-07-16 01:32:41.297810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.683 qpair failed and we were unable to recover it. 00:27:15.683 [2024-07-16 01:32:41.297956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.683 [2024-07-16 01:32:41.297967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.683 qpair failed and we were unable to recover it. 00:27:15.683 [2024-07-16 01:32:41.298122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.683 [2024-07-16 01:32:41.298134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.683 qpair failed and we were unable to recover it. 00:27:15.683 [2024-07-16 01:32:41.298282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.683 [2024-07-16 01:32:41.298293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.683 qpair failed and we were unable to recover it. 
00:27:15.683 [2024-07-16 01:32:41.298437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.683 [2024-07-16 01:32:41.298449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.683 qpair failed and we were unable to recover it. 00:27:15.683 [2024-07-16 01:32:41.298610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.684 [2024-07-16 01:32:41.298621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.684 qpair failed and we were unable to recover it. 00:27:15.684 [2024-07-16 01:32:41.298772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.684 [2024-07-16 01:32:41.298784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.684 qpair failed and we were unable to recover it. 00:27:15.684 [2024-07-16 01:32:41.298919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.684 [2024-07-16 01:32:41.298931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.684 qpair failed and we were unable to recover it. 00:27:15.684 [2024-07-16 01:32:41.299175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.684 [2024-07-16 01:32:41.299187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.684 qpair failed and we were unable to recover it. 00:27:15.684 [2024-07-16 01:32:41.299276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.684 [2024-07-16 01:32:41.299286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.684 qpair failed and we were unable to recover it. 00:27:15.684 [2024-07-16 01:32:41.299375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.684 [2024-07-16 01:32:41.299386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.684 qpair failed and we were unable to recover it. 00:27:15.684 [2024-07-16 01:32:41.299468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.684 [2024-07-16 01:32:41.299478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.684 qpair failed and we were unable to recover it. 00:27:15.684 [2024-07-16 01:32:41.299691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.684 [2024-07-16 01:32:41.299703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.684 qpair failed and we were unable to recover it. 00:27:15.684 [2024-07-16 01:32:41.299840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.684 [2024-07-16 01:32:41.299852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.684 qpair failed and we were unable to recover it. 
00:27:15.684 [2024-07-16 01:32:41.300067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.684 [2024-07-16 01:32:41.300078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.684 qpair failed and we were unable to recover it. 00:27:15.684 [2024-07-16 01:32:41.300221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.684 [2024-07-16 01:32:41.300232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.684 qpair failed and we were unable to recover it. 00:27:15.684 [2024-07-16 01:32:41.300379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.684 [2024-07-16 01:32:41.300391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.684 qpair failed and we were unable to recover it. 00:27:15.684 [2024-07-16 01:32:41.300539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.684 [2024-07-16 01:32:41.300551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.684 qpair failed and we were unable to recover it. 00:27:15.684 [2024-07-16 01:32:41.300729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.684 [2024-07-16 01:32:41.300740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.684 qpair failed and we were unable to recover it. 00:27:15.684 [2024-07-16 01:32:41.300833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.684 [2024-07-16 01:32:41.300844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.684 qpair failed and we were unable to recover it. 00:27:15.684 [2024-07-16 01:32:41.301044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.684 [2024-07-16 01:32:41.301056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.684 qpair failed and we were unable to recover it. 00:27:15.684 [2024-07-16 01:32:41.301200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.684 [2024-07-16 01:32:41.301212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.684 qpair failed and we were unable to recover it. 00:27:15.684 [2024-07-16 01:32:41.301361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.684 [2024-07-16 01:32:41.301373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.684 qpair failed and we were unable to recover it. 00:27:15.684 [2024-07-16 01:32:41.301587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.684 [2024-07-16 01:32:41.301598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.684 qpair failed and we were unable to recover it. 
00:27:15.684 [2024-07-16 01:32:41.301776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.684 [2024-07-16 01:32:41.301788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.684 qpair failed and we were unable to recover it.
[the same connect() failed, errno = 111 / sock connection error pair for tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 repeats continuously from 2024-07-16 01:32:41.301932 through 01:32:41.346322, every attempt ending with "qpair failed and we were unable to recover it."]
00:27:15.689 [2024-07-16 01:32:41.346611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.689 [2024-07-16 01:32:41.346642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.689 qpair failed and we were unable to recover it. 00:27:15.689 [2024-07-16 01:32:41.346891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.689 [2024-07-16 01:32:41.346962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.689 qpair failed and we were unable to recover it. 00:27:15.689 [2024-07-16 01:32:41.347207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.689 [2024-07-16 01:32:41.347276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.689 qpair failed and we were unable to recover it. 00:27:15.689 [2024-07-16 01:32:41.347510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.689 [2024-07-16 01:32:41.347547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.689 qpair failed and we were unable to recover it. 00:27:15.689 [2024-07-16 01:32:41.347789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.689 [2024-07-16 01:32:41.347821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.689 qpair failed and we were unable to recover it. 00:27:15.689 [2024-07-16 01:32:41.347964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.689 [2024-07-16 01:32:41.347995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.689 qpair failed and we were unable to recover it. 00:27:15.689 [2024-07-16 01:32:41.348288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.689 [2024-07-16 01:32:41.348319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.689 qpair failed and we were unable to recover it. 00:27:15.690 [2024-07-16 01:32:41.348572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.690 [2024-07-16 01:32:41.348603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.690 qpair failed and we were unable to recover it. 00:27:15.690 [2024-07-16 01:32:41.348846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.690 [2024-07-16 01:32:41.348876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.690 qpair failed and we were unable to recover it. 00:27:15.690 [2024-07-16 01:32:41.349147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.690 [2024-07-16 01:32:41.349177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.690 qpair failed and we were unable to recover it. 
00:27:15.690 [2024-07-16 01:32:41.349391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.690 [2024-07-16 01:32:41.349437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.690 qpair failed and we were unable to recover it. 00:27:15.690 [2024-07-16 01:32:41.349599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.690 [2024-07-16 01:32:41.349615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.690 qpair failed and we were unable to recover it. 00:27:15.690 [2024-07-16 01:32:41.349803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.690 [2024-07-16 01:32:41.349833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.690 qpair failed and we were unable to recover it. 00:27:15.690 [2024-07-16 01:32:41.350020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.690 [2024-07-16 01:32:41.350051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.690 qpair failed and we were unable to recover it. 00:27:15.690 [2024-07-16 01:32:41.350315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.690 [2024-07-16 01:32:41.350364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.690 qpair failed and we were unable to recover it. 00:27:15.690 [2024-07-16 01:32:41.350651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.690 [2024-07-16 01:32:41.350682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.690 qpair failed and we were unable to recover it. 00:27:15.690 [2024-07-16 01:32:41.350885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.690 [2024-07-16 01:32:41.350915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.690 qpair failed and we were unable to recover it. 00:27:15.690 [2024-07-16 01:32:41.351088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.690 [2024-07-16 01:32:41.351119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.690 qpair failed and we were unable to recover it. 00:27:15.690 [2024-07-16 01:32:41.351305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.690 [2024-07-16 01:32:41.351321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.690 qpair failed and we were unable to recover it. 00:27:15.690 [2024-07-16 01:32:41.351439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.690 [2024-07-16 01:32:41.351455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.690 qpair failed and we were unable to recover it. 
00:27:15.690 [2024-07-16 01:32:41.351692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.690 [2024-07-16 01:32:41.351709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.690 qpair failed and we were unable to recover it. 00:27:15.690 [2024-07-16 01:32:41.351930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.690 [2024-07-16 01:32:41.351946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.690 qpair failed and we were unable to recover it. 00:27:15.690 [2024-07-16 01:32:41.352207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.690 [2024-07-16 01:32:41.352239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.690 qpair failed and we were unable to recover it. 00:27:15.690 [2024-07-16 01:32:41.352416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.690 [2024-07-16 01:32:41.352447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.690 qpair failed and we were unable to recover it. 00:27:15.690 [2024-07-16 01:32:41.352591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.690 [2024-07-16 01:32:41.352621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.690 qpair failed and we were unable to recover it. 00:27:15.690 [2024-07-16 01:32:41.352889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.690 [2024-07-16 01:32:41.352920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.690 qpair failed and we were unable to recover it. 00:27:15.690 [2024-07-16 01:32:41.353233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.690 [2024-07-16 01:32:41.353263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.690 qpair failed and we were unable to recover it. 00:27:15.690 [2024-07-16 01:32:41.353469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.690 [2024-07-16 01:32:41.353500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.690 qpair failed and we were unable to recover it. 00:27:15.690 [2024-07-16 01:32:41.353775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.690 [2024-07-16 01:32:41.353807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.690 qpair failed and we were unable to recover it. 00:27:15.690 [2024-07-16 01:32:41.354100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.690 [2024-07-16 01:32:41.354131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.690 qpair failed and we were unable to recover it. 
00:27:15.690 [2024-07-16 01:32:41.354377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.690 [2024-07-16 01:32:41.354394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.690 qpair failed and we were unable to recover it. 00:27:15.690 [2024-07-16 01:32:41.354496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.690 [2024-07-16 01:32:41.354527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.690 qpair failed and we were unable to recover it. 00:27:15.690 [2024-07-16 01:32:41.354770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.690 [2024-07-16 01:32:41.354800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.690 qpair failed and we were unable to recover it. 00:27:15.690 [2024-07-16 01:32:41.355131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.690 [2024-07-16 01:32:41.355161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.690 qpair failed and we were unable to recover it. 00:27:15.690 [2024-07-16 01:32:41.355430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.690 [2024-07-16 01:32:41.355461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.690 qpair failed and we were unable to recover it. 00:27:15.690 [2024-07-16 01:32:41.355598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.690 [2024-07-16 01:32:41.355630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.690 qpair failed and we were unable to recover it. 00:27:15.690 [2024-07-16 01:32:41.355819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.690 [2024-07-16 01:32:41.355849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.690 qpair failed and we were unable to recover it. 00:27:15.690 [2024-07-16 01:32:41.356092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.690 [2024-07-16 01:32:41.356122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.690 qpair failed and we were unable to recover it. 00:27:15.690 [2024-07-16 01:32:41.356382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.690 [2024-07-16 01:32:41.356398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.690 qpair failed and we were unable to recover it. 00:27:15.690 [2024-07-16 01:32:41.356630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.690 [2024-07-16 01:32:41.356647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.690 qpair failed and we were unable to recover it. 
00:27:15.690 [2024-07-16 01:32:41.356855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.690 [2024-07-16 01:32:41.356872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.690 qpair failed and we were unable to recover it. 00:27:15.690 [2024-07-16 01:32:41.357145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.690 [2024-07-16 01:32:41.357162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.690 qpair failed and we were unable to recover it. 00:27:15.690 [2024-07-16 01:32:41.357397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.690 [2024-07-16 01:32:41.357414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.690 qpair failed and we were unable to recover it. 00:27:15.690 [2024-07-16 01:32:41.357564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.690 [2024-07-16 01:32:41.357580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.690 qpair failed and we were unable to recover it. 00:27:15.690 [2024-07-16 01:32:41.357778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.690 [2024-07-16 01:32:41.357795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.690 qpair failed and we were unable to recover it. 00:27:15.690 [2024-07-16 01:32:41.358034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.690 [2024-07-16 01:32:41.358064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.690 qpair failed and we were unable to recover it. 00:27:15.690 [2024-07-16 01:32:41.358353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.690 [2024-07-16 01:32:41.358385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.690 qpair failed and we were unable to recover it. 00:27:15.690 [2024-07-16 01:32:41.358636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.691 [2024-07-16 01:32:41.358667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.691 qpair failed and we were unable to recover it. 00:27:15.691 [2024-07-16 01:32:41.358786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.691 [2024-07-16 01:32:41.358817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.691 qpair failed and we were unable to recover it. 00:27:15.691 [2024-07-16 01:32:41.359003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.691 [2024-07-16 01:32:41.359033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.691 qpair failed and we were unable to recover it. 
00:27:15.691 [2024-07-16 01:32:41.359248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.691 [2024-07-16 01:32:41.359279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.691 qpair failed and we were unable to recover it. 00:27:15.691 [2024-07-16 01:32:41.359552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.691 [2024-07-16 01:32:41.359583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.691 qpair failed and we were unable to recover it. 00:27:15.691 [2024-07-16 01:32:41.359778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.691 [2024-07-16 01:32:41.359795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.691 qpair failed and we were unable to recover it. 00:27:15.691 [2024-07-16 01:32:41.360031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.691 [2024-07-16 01:32:41.360047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.691 qpair failed and we were unable to recover it. 00:27:15.691 [2024-07-16 01:32:41.360199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.691 [2024-07-16 01:32:41.360235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.691 qpair failed and we were unable to recover it. 00:27:15.691 [2024-07-16 01:32:41.360424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.691 [2024-07-16 01:32:41.360456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.691 qpair failed and we were unable to recover it. 00:27:15.691 [2024-07-16 01:32:41.360727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.691 [2024-07-16 01:32:41.360759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.691 qpair failed and we were unable to recover it. 00:27:15.691 [2024-07-16 01:32:41.360949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.691 [2024-07-16 01:32:41.360980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.691 qpair failed and we were unable to recover it. 00:27:15.691 [2024-07-16 01:32:41.361261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.691 [2024-07-16 01:32:41.361291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.691 qpair failed and we were unable to recover it. 00:27:15.691 [2024-07-16 01:32:41.361507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.691 [2024-07-16 01:32:41.361539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.691 qpair failed and we were unable to recover it. 
00:27:15.691 [2024-07-16 01:32:41.361761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.691 [2024-07-16 01:32:41.361793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.691 qpair failed and we were unable to recover it. 00:27:15.691 [2024-07-16 01:32:41.361992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.691 [2024-07-16 01:32:41.362022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.691 qpair failed and we were unable to recover it. 00:27:15.691 [2024-07-16 01:32:41.362238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.691 [2024-07-16 01:32:41.362254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.691 qpair failed and we were unable to recover it. 00:27:15.691 [2024-07-16 01:32:41.362498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.691 [2024-07-16 01:32:41.362530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.691 qpair failed and we were unable to recover it. 00:27:15.691 [2024-07-16 01:32:41.362711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.691 [2024-07-16 01:32:41.362742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.691 qpair failed and we were unable to recover it. 00:27:15.691 [2024-07-16 01:32:41.362931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.691 [2024-07-16 01:32:41.362961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.691 qpair failed and we were unable to recover it. 00:27:15.691 [2024-07-16 01:32:41.363141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.691 [2024-07-16 01:32:41.363172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.691 qpair failed and we were unable to recover it. 00:27:15.691 [2024-07-16 01:32:41.363352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.691 [2024-07-16 01:32:41.363369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.691 qpair failed and we were unable to recover it. 00:27:15.691 [2024-07-16 01:32:41.363588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.691 [2024-07-16 01:32:41.363619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.691 qpair failed and we were unable to recover it. 00:27:15.691 [2024-07-16 01:32:41.363860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.691 [2024-07-16 01:32:41.363891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.691 qpair failed and we were unable to recover it. 
00:27:15.691 [2024-07-16 01:32:41.364135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.691 [2024-07-16 01:32:41.364165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.691 qpair failed and we were unable to recover it. 00:27:15.691 [2024-07-16 01:32:41.364435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.691 [2024-07-16 01:32:41.364467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.691 qpair failed and we were unable to recover it. 00:27:15.691 [2024-07-16 01:32:41.364712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.691 [2024-07-16 01:32:41.364743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.691 qpair failed and we were unable to recover it. 00:27:15.691 [2024-07-16 01:32:41.364953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.691 [2024-07-16 01:32:41.364984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.691 qpair failed and we were unable to recover it. 00:27:15.691 [2024-07-16 01:32:41.365258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.691 [2024-07-16 01:32:41.365289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.691 qpair failed and we were unable to recover it. 00:27:15.691 [2024-07-16 01:32:41.365569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.691 [2024-07-16 01:32:41.365600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.691 qpair failed and we were unable to recover it. 00:27:15.691 [2024-07-16 01:32:41.365880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.691 [2024-07-16 01:32:41.365910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.691 qpair failed and we were unable to recover it. 00:27:15.691 [2024-07-16 01:32:41.366098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.691 [2024-07-16 01:32:41.366128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.691 qpair failed and we were unable to recover it. 00:27:15.691 [2024-07-16 01:32:41.366423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.691 [2024-07-16 01:32:41.366454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.691 qpair failed and we were unable to recover it. 00:27:15.691 [2024-07-16 01:32:41.366705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.691 [2024-07-16 01:32:41.366735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.691 qpair failed and we were unable to recover it. 
00:27:15.691 [2024-07-16 01:32:41.367046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.691 [2024-07-16 01:32:41.367076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.691 qpair failed and we were unable to recover it. 00:27:15.691 [2024-07-16 01:32:41.367310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.691 [2024-07-16 01:32:41.367394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.691 qpair failed and we were unable to recover it. 00:27:15.691 [2024-07-16 01:32:41.367613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.691 [2024-07-16 01:32:41.367648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.691 qpair failed and we were unable to recover it. 00:27:15.691 [2024-07-16 01:32:41.367864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.691 [2024-07-16 01:32:41.367896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.691 qpair failed and we were unable to recover it. 00:27:15.691 [2024-07-16 01:32:41.368163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.691 [2024-07-16 01:32:41.368194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.691 qpair failed and we were unable to recover it. 00:27:15.691 [2024-07-16 01:32:41.368391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.691 [2024-07-16 01:32:41.368424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.691 qpair failed and we were unable to recover it. 00:27:15.691 [2024-07-16 01:32:41.368555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.691 [2024-07-16 01:32:41.368571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.691 qpair failed and we were unable to recover it. 00:27:15.691 [2024-07-16 01:32:41.368725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.691 [2024-07-16 01:32:41.368769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.691 qpair failed and we were unable to recover it. 00:27:15.692 [2024-07-16 01:32:41.369020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.692 [2024-07-16 01:32:41.369052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.692 qpair failed and we were unable to recover it. 00:27:15.692 [2024-07-16 01:32:41.369238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.692 [2024-07-16 01:32:41.369267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.692 qpair failed and we were unable to recover it. 
00:27:15.692 [2024-07-16 01:32:41.369473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.692 [2024-07-16 01:32:41.369489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.692 qpair failed and we were unable to recover it. 00:27:15.692 [2024-07-16 01:32:41.369717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.692 [2024-07-16 01:32:41.369748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.692 qpair failed and we were unable to recover it. 00:27:15.692 [2024-07-16 01:32:41.369974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.692 [2024-07-16 01:32:41.370005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.692 qpair failed and we were unable to recover it. 00:27:15.692 [2024-07-16 01:32:41.370200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.692 [2024-07-16 01:32:41.370232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.692 qpair failed and we were unable to recover it. 00:27:15.692 [2024-07-16 01:32:41.370415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.692 [2024-07-16 01:32:41.370446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.692 qpair failed and we were unable to recover it. 00:27:15.692 [2024-07-16 01:32:41.370697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.692 [2024-07-16 01:32:41.370728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.692 qpair failed and we were unable to recover it. 00:27:15.692 [2024-07-16 01:32:41.370971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.692 [2024-07-16 01:32:41.371002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.692 qpair failed and we were unable to recover it. 00:27:15.692 [2024-07-16 01:32:41.371263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.692 [2024-07-16 01:32:41.371293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.692 qpair failed and we were unable to recover it. 00:27:15.692 [2024-07-16 01:32:41.371481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.692 [2024-07-16 01:32:41.371498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.692 qpair failed and we were unable to recover it. 00:27:15.692 [2024-07-16 01:32:41.371643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.692 [2024-07-16 01:32:41.371675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.692 qpair failed and we were unable to recover it. 
00:27:15.692 [2024-07-16 01:32:41.371861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.692 [2024-07-16 01:32:41.371891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.692 qpair failed and we were unable to recover it. 00:27:15.692 [2024-07-16 01:32:41.372146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.692 [2024-07-16 01:32:41.372178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.692 qpair failed and we were unable to recover it. 00:27:15.692 [2024-07-16 01:32:41.372438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.692 [2024-07-16 01:32:41.372454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.692 qpair failed and we were unable to recover it. 00:27:15.692 [2024-07-16 01:32:41.372671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.692 [2024-07-16 01:32:41.372687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.692 qpair failed and we were unable to recover it. 00:27:15.692 [2024-07-16 01:32:41.372789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.692 [2024-07-16 01:32:41.372805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.692 qpair failed and we were unable to recover it. 00:27:15.692 [2024-07-16 01:32:41.372985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.692 [2024-07-16 01:32:41.373017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.692 qpair failed and we were unable to recover it. 00:27:15.692 [2024-07-16 01:32:41.373285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.692 [2024-07-16 01:32:41.373316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.692 qpair failed and we were unable to recover it. 00:27:15.692 [2024-07-16 01:32:41.373519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.692 [2024-07-16 01:32:41.373552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.692 qpair failed and we were unable to recover it. 00:27:15.692 [2024-07-16 01:32:41.373747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.692 [2024-07-16 01:32:41.373782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.692 qpair failed and we were unable to recover it. 00:27:15.692 [2024-07-16 01:32:41.373964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.692 [2024-07-16 01:32:41.373994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.692 qpair failed and we were unable to recover it. 
00:27:15.692 [2024-07-16 01:32:41.374257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.692 [2024-07-16 01:32:41.374287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.692 qpair failed and we were unable to recover it. 00:27:15.692 [2024-07-16 01:32:41.374487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.692 [2024-07-16 01:32:41.374519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.692 qpair failed and we were unable to recover it. 00:27:15.692 [2024-07-16 01:32:41.374775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.692 [2024-07-16 01:32:41.374805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.692 qpair failed and we were unable to recover it. 00:27:15.692 [2024-07-16 01:32:41.375073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.692 [2024-07-16 01:32:41.375103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.692 qpair failed and we were unable to recover it. 00:27:15.692 [2024-07-16 01:32:41.375387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.692 [2024-07-16 01:32:41.375403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.692 qpair failed and we were unable to recover it. 00:27:15.692 [2024-07-16 01:32:41.375552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.692 [2024-07-16 01:32:41.375569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.692 qpair failed and we were unable to recover it. 00:27:15.692 [2024-07-16 01:32:41.375778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.692 [2024-07-16 01:32:41.375794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.692 qpair failed and we were unable to recover it. 00:27:15.692 [2024-07-16 01:32:41.376046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.692 [2024-07-16 01:32:41.376078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.692 qpair failed and we were unable to recover it. 00:27:15.692 [2024-07-16 01:32:41.376294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.692 [2024-07-16 01:32:41.376325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.692 qpair failed and we were unable to recover it. 00:27:15.692 [2024-07-16 01:32:41.376516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.692 [2024-07-16 01:32:41.376547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.692 qpair failed and we were unable to recover it. 
00:27:15.692 [2024-07-16 01:32:41.376814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.692 [2024-07-16 01:32:41.376844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.692 qpair failed and we were unable to recover it. 00:27:15.692 [2024-07-16 01:32:41.377028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.692 [2024-07-16 01:32:41.377059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.692 qpair failed and we were unable to recover it. 00:27:15.692 [2024-07-16 01:32:41.377281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.692 [2024-07-16 01:32:41.377312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.692 qpair failed and we were unable to recover it. 00:27:15.692 [2024-07-16 01:32:41.377519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.692 [2024-07-16 01:32:41.377559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.692 qpair failed and we were unable to recover it. 00:27:15.692 [2024-07-16 01:32:41.377701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.692 [2024-07-16 01:32:41.377717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.692 qpair failed and we were unable to recover it. 00:27:15.692 [2024-07-16 01:32:41.377821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.692 [2024-07-16 01:32:41.377866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.692 qpair failed and we were unable to recover it. 00:27:15.692 [2024-07-16 01:32:41.378106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.692 [2024-07-16 01:32:41.378137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.692 qpair failed and we were unable to recover it. 00:27:15.692 [2024-07-16 01:32:41.378410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.692 [2024-07-16 01:32:41.378443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.692 qpair failed and we were unable to recover it. 00:27:15.692 [2024-07-16 01:32:41.378685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.692 [2024-07-16 01:32:41.378716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.692 qpair failed and we were unable to recover it. 00:27:15.693 [2024-07-16 01:32:41.378979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.693 [2024-07-16 01:32:41.379009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.693 qpair failed and we were unable to recover it. 
00:27:15.693 [2024-07-16 01:32:41.379254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.693 [2024-07-16 01:32:41.379286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.693 qpair failed and we were unable to recover it. 00:27:15.693 [2024-07-16 01:32:41.379538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.693 [2024-07-16 01:32:41.379570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.693 qpair failed and we were unable to recover it. 00:27:15.693 [2024-07-16 01:32:41.379739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.693 [2024-07-16 01:32:41.379755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.693 qpair failed and we were unable to recover it. 00:27:15.693 [2024-07-16 01:32:41.379984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.693 [2024-07-16 01:32:41.380000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.693 qpair failed and we were unable to recover it. 00:27:15.693 [2024-07-16 01:32:41.380238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.693 [2024-07-16 01:32:41.380268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.693 qpair failed and we were unable to recover it. 00:27:15.693 [2024-07-16 01:32:41.380562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.693 [2024-07-16 01:32:41.380599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.693 qpair failed and we were unable to recover it. 00:27:15.693 [2024-07-16 01:32:41.380865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.693 [2024-07-16 01:32:41.380895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.693 qpair failed and we were unable to recover it. 00:27:15.693 [2024-07-16 01:32:41.381082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.693 [2024-07-16 01:32:41.381113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.693 qpair failed and we were unable to recover it. 00:27:15.693 [2024-07-16 01:32:41.381381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.693 [2024-07-16 01:32:41.381399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.693 qpair failed and we were unable to recover it. 00:27:15.693 [2024-07-16 01:32:41.381644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.693 [2024-07-16 01:32:41.381676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.693 qpair failed and we were unable to recover it. 
00:27:15.693 [2024-07-16 01:32:41.381921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.693 [2024-07-16 01:32:41.381951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.693 qpair failed and we were unable to recover it. 00:27:15.693 [2024-07-16 01:32:41.382174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.693 [2024-07-16 01:32:41.382205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.693 qpair failed and we were unable to recover it. 00:27:15.693 [2024-07-16 01:32:41.382359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.693 [2024-07-16 01:32:41.382392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.693 qpair failed and we were unable to recover it. 00:27:15.693 [2024-07-16 01:32:41.382661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.693 [2024-07-16 01:32:41.382693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.693 qpair failed and we were unable to recover it. 00:27:15.693 [2024-07-16 01:32:41.382902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.693 [2024-07-16 01:32:41.382932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.693 qpair failed and we were unable to recover it. 00:27:15.693 [2024-07-16 01:32:41.383179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.693 [2024-07-16 01:32:41.383210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.693 qpair failed and we were unable to recover it. 00:27:15.693 [2024-07-16 01:32:41.383475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.693 [2024-07-16 01:32:41.383491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.693 qpair failed and we were unable to recover it. 00:27:15.693 [2024-07-16 01:32:41.383668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.693 [2024-07-16 01:32:41.383685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.693 qpair failed and we were unable to recover it. 00:27:15.693 [2024-07-16 01:32:41.383848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.693 [2024-07-16 01:32:41.383865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.693 qpair failed and we were unable to recover it. 00:27:15.693 [2024-07-16 01:32:41.384058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.693 [2024-07-16 01:32:41.384090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.693 qpair failed and we were unable to recover it. 
00:27:15.693 [2024-07-16 01:32:41.384428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.693 [2024-07-16 01:32:41.384461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.693 qpair failed and we were unable to recover it. 00:27:15.693 [2024-07-16 01:32:41.384723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.693 [2024-07-16 01:32:41.384740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.693 qpair failed and we were unable to recover it. 00:27:15.693 [2024-07-16 01:32:41.384847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.693 [2024-07-16 01:32:41.384863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.693 qpair failed and we were unable to recover it. 00:27:15.693 [2024-07-16 01:32:41.384973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.693 [2024-07-16 01:32:41.384989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.693 qpair failed and we were unable to recover it. 00:27:15.693 [2024-07-16 01:32:41.385225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.693 [2024-07-16 01:32:41.385241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.693 qpair failed and we were unable to recover it. 00:27:15.693 [2024-07-16 01:32:41.385389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.693 [2024-07-16 01:32:41.385406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.693 qpair failed and we were unable to recover it. 00:27:15.693 [2024-07-16 01:32:41.385626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.693 [2024-07-16 01:32:41.385657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.693 qpair failed and we were unable to recover it. 00:27:15.693 [2024-07-16 01:32:41.385927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.693 [2024-07-16 01:32:41.385958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.693 qpair failed and we were unable to recover it. 00:27:15.693 [2024-07-16 01:32:41.386171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.693 [2024-07-16 01:32:41.386201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.693 qpair failed and we were unable to recover it. 00:27:15.693 [2024-07-16 01:32:41.386498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.693 [2024-07-16 01:32:41.386531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.693 qpair failed and we were unable to recover it. 
00:27:15.693 [2024-07-16 01:32:41.386795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.693 [2024-07-16 01:32:41.386812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.693 qpair failed and we were unable to recover it. 00:27:15.693 [2024-07-16 01:32:41.386980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.693 [2024-07-16 01:32:41.386997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.693 qpair failed and we were unable to recover it. 00:27:15.693 [2024-07-16 01:32:41.387176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.693 [2024-07-16 01:32:41.387193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.693 qpair failed and we were unable to recover it. 00:27:15.693 [2024-07-16 01:32:41.387368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.693 [2024-07-16 01:32:41.387386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.693 qpair failed and we were unable to recover it. 00:27:15.693 [2024-07-16 01:32:41.387628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.693 [2024-07-16 01:32:41.387669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.693 qpair failed and we were unable to recover it. 00:27:15.693 [2024-07-16 01:32:41.387946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.694 [2024-07-16 01:32:41.387977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.694 qpair failed and we were unable to recover it. 00:27:15.694 [2024-07-16 01:32:41.388112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.694 [2024-07-16 01:32:41.388142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.694 qpair failed and we were unable to recover it. 00:27:15.694 [2024-07-16 01:32:41.388408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.694 [2024-07-16 01:32:41.388425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.694 qpair failed and we were unable to recover it. 00:27:15.694 [2024-07-16 01:32:41.388609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.694 [2024-07-16 01:32:41.388626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.694 qpair failed and we were unable to recover it. 00:27:15.694 [2024-07-16 01:32:41.388871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.694 [2024-07-16 01:32:41.388888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.694 qpair failed and we were unable to recover it. 
00:27:15.694 [2024-07-16 01:32:41.389082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.694 [2024-07-16 01:32:41.389099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.694 qpair failed and we were unable to recover it. 00:27:15.694 [2024-07-16 01:32:41.389348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.694 [2024-07-16 01:32:41.389364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.694 qpair failed and we were unable to recover it. 00:27:15.694 [2024-07-16 01:32:41.389471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.694 [2024-07-16 01:32:41.389488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.694 qpair failed and we were unable to recover it. 00:27:15.694 [2024-07-16 01:32:41.389634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.694 [2024-07-16 01:32:41.389651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.694 qpair failed and we were unable to recover it. 00:27:15.694 [2024-07-16 01:32:41.389807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.694 [2024-07-16 01:32:41.389824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.694 qpair failed and we were unable to recover it. 00:27:15.694 [2024-07-16 01:32:41.389990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.694 [2024-07-16 01:32:41.390006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.694 qpair failed and we were unable to recover it. 00:27:15.694 [2024-07-16 01:32:41.390196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.694 [2024-07-16 01:32:41.390213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.694 qpair failed and we were unable to recover it. 00:27:15.694 [2024-07-16 01:32:41.390429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.694 [2024-07-16 01:32:41.390447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.694 qpair failed and we were unable to recover it. 00:27:15.694 [2024-07-16 01:32:41.390658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.694 [2024-07-16 01:32:41.390675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.694 qpair failed and we were unable to recover it. 00:27:15.694 [2024-07-16 01:32:41.390907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.694 [2024-07-16 01:32:41.390924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.694 qpair failed and we were unable to recover it. 
00:27:15.694 [2024-07-16 01:32:41.391184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.694 [2024-07-16 01:32:41.391200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.694 qpair failed and we were unable to recover it. 00:27:15.694 [2024-07-16 01:32:41.391298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.694 [2024-07-16 01:32:41.391314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.694 qpair failed and we were unable to recover it. 00:27:15.694 [2024-07-16 01:32:41.391599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.694 [2024-07-16 01:32:41.391616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.694 qpair failed and we were unable to recover it. 00:27:15.694 [2024-07-16 01:32:41.391769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.694 [2024-07-16 01:32:41.391785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.694 qpair failed and we were unable to recover it. 00:27:15.694 [2024-07-16 01:32:41.391956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.694 [2024-07-16 01:32:41.391973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.694 qpair failed and we were unable to recover it. 00:27:15.694 [2024-07-16 01:32:41.392066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.694 [2024-07-16 01:32:41.392081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.694 qpair failed and we were unable to recover it. 00:27:15.694 [2024-07-16 01:32:41.392176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.694 [2024-07-16 01:32:41.392191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.694 qpair failed and we were unable to recover it. 00:27:15.694 [2024-07-16 01:32:41.392335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.694 [2024-07-16 01:32:41.392359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.694 qpair failed and we were unable to recover it. 00:27:15.694 [2024-07-16 01:32:41.392526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.694 [2024-07-16 01:32:41.392543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.694 qpair failed and we were unable to recover it. 00:27:15.694 [2024-07-16 01:32:41.392781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.694 [2024-07-16 01:32:41.392797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.694 qpair failed and we were unable to recover it. 
00:27:15.694 [2024-07-16 01:32:41.393027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.694 [2024-07-16 01:32:41.393044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.694 qpair failed and we were unable to recover it. 00:27:15.694 [2024-07-16 01:32:41.393257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.694 [2024-07-16 01:32:41.393273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.694 qpair failed and we were unable to recover it. 00:27:15.694 [2024-07-16 01:32:41.393487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.694 [2024-07-16 01:32:41.393505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.694 qpair failed and we were unable to recover it. 00:27:15.694 [2024-07-16 01:32:41.393693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.694 [2024-07-16 01:32:41.393710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.694 qpair failed and we were unable to recover it. 00:27:15.694 [2024-07-16 01:32:41.393941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.694 [2024-07-16 01:32:41.393957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.694 qpair failed and we were unable to recover it. 00:27:15.694 [2024-07-16 01:32:41.394104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.694 [2024-07-16 01:32:41.394120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.694 qpair failed and we were unable to recover it. 00:27:15.694 [2024-07-16 01:32:41.394285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.694 [2024-07-16 01:32:41.394302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.694 qpair failed and we were unable to recover it. 00:27:15.694 [2024-07-16 01:32:41.394510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.694 [2024-07-16 01:32:41.394527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.694 qpair failed and we were unable to recover it. 00:27:15.694 [2024-07-16 01:32:41.394618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.694 [2024-07-16 01:32:41.394632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.694 qpair failed and we were unable to recover it. 00:27:15.694 [2024-07-16 01:32:41.394791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.694 [2024-07-16 01:32:41.394807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.694 qpair failed and we were unable to recover it. 
00:27:15.694 [2024-07-16 01:32:41.394980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.694 [2024-07-16 01:32:41.394997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.694 qpair failed and we were unable to recover it. 00:27:15.694 [2024-07-16 01:32:41.395228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.694 [2024-07-16 01:32:41.395245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.694 qpair failed and we were unable to recover it. 00:27:15.694 [2024-07-16 01:32:41.395482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.694 [2024-07-16 01:32:41.395499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.694 qpair failed and we were unable to recover it. 00:27:15.694 [2024-07-16 01:32:41.395660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.694 [2024-07-16 01:32:41.395679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.694 qpair failed and we were unable to recover it. 00:27:15.695 [2024-07-16 01:32:41.395916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.695 [2024-07-16 01:32:41.395932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.695 qpair failed and we were unable to recover it. 00:27:15.695 [2024-07-16 01:32:41.396141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.695 [2024-07-16 01:32:41.396157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.695 qpair failed and we were unable to recover it. 00:27:15.695 [2024-07-16 01:32:41.396310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.695 [2024-07-16 01:32:41.396326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.695 qpair failed and we were unable to recover it. 00:27:15.695 [2024-07-16 01:32:41.396494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.695 [2024-07-16 01:32:41.396511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.695 qpair failed and we were unable to recover it. 00:27:15.695 [2024-07-16 01:32:41.396771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.695 [2024-07-16 01:32:41.396788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.695 qpair failed and we were unable to recover it. 00:27:15.695 [2024-07-16 01:32:41.396969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.695 [2024-07-16 01:32:41.396985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.695 qpair failed and we were unable to recover it. 
00:27:15.695 [2024-07-16 01:32:41.397261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.695 [2024-07-16 01:32:41.397278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.695 qpair failed and we were unable to recover it. 00:27:15.695 [2024-07-16 01:32:41.397476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.695 [2024-07-16 01:32:41.397493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.695 qpair failed and we were unable to recover it. 00:27:15.695 [2024-07-16 01:32:41.397647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.695 [2024-07-16 01:32:41.397664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.695 qpair failed and we were unable to recover it. 00:27:15.695 [2024-07-16 01:32:41.397766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.695 [2024-07-16 01:32:41.397781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.695 qpair failed and we were unable to recover it. 00:27:15.695 [2024-07-16 01:32:41.397927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.695 [2024-07-16 01:32:41.397943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.695 qpair failed and we were unable to recover it. 00:27:15.695 [2024-07-16 01:32:41.398154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.695 [2024-07-16 01:32:41.398170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.695 qpair failed and we were unable to recover it. 00:27:15.695 [2024-07-16 01:32:41.398257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.695 [2024-07-16 01:32:41.398272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.695 qpair failed and we were unable to recover it. 00:27:15.695 [2024-07-16 01:32:41.398455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.695 [2024-07-16 01:32:41.398472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.695 qpair failed and we were unable to recover it. 00:27:15.695 [2024-07-16 01:32:41.398661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.695 [2024-07-16 01:32:41.398677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.695 qpair failed and we were unable to recover it. 00:27:15.695 [2024-07-16 01:32:41.398918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.695 [2024-07-16 01:32:41.398935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.695 qpair failed and we were unable to recover it. 
00:27:15.695 [2024-07-16 01:32:41.399172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.695 [2024-07-16 01:32:41.399189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.695 qpair failed and we were unable to recover it. 00:27:15.695 [2024-07-16 01:32:41.399351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.695 [2024-07-16 01:32:41.399368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.695 qpair failed and we were unable to recover it. 00:27:15.695 [2024-07-16 01:32:41.399530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.695 [2024-07-16 01:32:41.399546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.695 qpair failed and we were unable to recover it. 00:27:15.695 [2024-07-16 01:32:41.399770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.695 [2024-07-16 01:32:41.399786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.695 qpair failed and we were unable to recover it. 00:27:15.695 [2024-07-16 01:32:41.399998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.695 [2024-07-16 01:32:41.400014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.695 qpair failed and we were unable to recover it. 00:27:15.695 [2024-07-16 01:32:41.400267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.695 [2024-07-16 01:32:41.400284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.695 qpair failed and we were unable to recover it. 00:27:15.695 [2024-07-16 01:32:41.400527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.695 [2024-07-16 01:32:41.400544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.695 qpair failed and we were unable to recover it. 00:27:15.695 [2024-07-16 01:32:41.400755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.695 [2024-07-16 01:32:41.400772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.695 qpair failed and we were unable to recover it. 00:27:15.695 [2024-07-16 01:32:41.401027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.695 [2024-07-16 01:32:41.401044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.695 qpair failed and we were unable to recover it. 00:27:15.695 [2024-07-16 01:32:41.401210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.695 [2024-07-16 01:32:41.401226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.695 qpair failed and we were unable to recover it. 
00:27:15.695 [2024-07-16 01:32:41.401467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.695 [2024-07-16 01:32:41.401487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.695 qpair failed and we were unable to recover it. 00:27:15.695 [2024-07-16 01:32:41.401586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.695 [2024-07-16 01:32:41.401601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.695 qpair failed and we were unable to recover it. 00:27:15.695 [2024-07-16 01:32:41.401813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.695 [2024-07-16 01:32:41.401829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.695 qpair failed and we were unable to recover it. 00:27:15.695 [2024-07-16 01:32:41.402020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.695 [2024-07-16 01:32:41.402036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.695 qpair failed and we were unable to recover it. 00:27:15.695 [2024-07-16 01:32:41.402179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.695 [2024-07-16 01:32:41.402195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.695 qpair failed and we were unable to recover it. 00:27:15.695 [2024-07-16 01:32:41.402353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.695 [2024-07-16 01:32:41.402373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.695 qpair failed and we were unable to recover it. 00:27:15.695 [2024-07-16 01:32:41.402531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.695 [2024-07-16 01:32:41.402548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.695 qpair failed and we were unable to recover it. 00:27:15.695 [2024-07-16 01:32:41.402805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.695 [2024-07-16 01:32:41.402822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.695 qpair failed and we were unable to recover it. 00:27:15.695 [2024-07-16 01:32:41.403035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.695 [2024-07-16 01:32:41.403052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.695 qpair failed and we were unable to recover it. 00:27:15.695 [2024-07-16 01:32:41.403216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.695 [2024-07-16 01:32:41.403233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.695 qpair failed and we were unable to recover it. 
00:27:15.695 [2024-07-16 01:32:41.403470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.695 [2024-07-16 01:32:41.403487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.695 qpair failed and we were unable to recover it. 00:27:15.695 [2024-07-16 01:32:41.403723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.695 [2024-07-16 01:32:41.403740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.695 qpair failed and we were unable to recover it. 00:27:15.695 [2024-07-16 01:32:41.403977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.695 [2024-07-16 01:32:41.403993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.695 qpair failed and we were unable to recover it. 00:27:15.695 [2024-07-16 01:32:41.404237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.696 [2024-07-16 01:32:41.404254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.696 qpair failed and we were unable to recover it. 00:27:15.696 [2024-07-16 01:32:41.404494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.696 [2024-07-16 01:32:41.404511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.696 qpair failed and we were unable to recover it. 00:27:15.696 [2024-07-16 01:32:41.404659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.696 [2024-07-16 01:32:41.404676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.696 qpair failed and we were unable to recover it. 00:27:15.696 [2024-07-16 01:32:41.404909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.696 [2024-07-16 01:32:41.404925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.696 qpair failed and we were unable to recover it. 00:27:15.696 [2024-07-16 01:32:41.405091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.696 [2024-07-16 01:32:41.405107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.696 qpair failed and we were unable to recover it. 00:27:15.696 [2024-07-16 01:32:41.405349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.696 [2024-07-16 01:32:41.405366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.696 qpair failed and we were unable to recover it. 00:27:15.696 [2024-07-16 01:32:41.405459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.696 [2024-07-16 01:32:41.405474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.696 qpair failed and we were unable to recover it. 
00:27:15.696 [2024-07-16 01:32:41.405732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.696 [2024-07-16 01:32:41.405748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.696 qpair failed and we were unable to recover it. 00:27:15.696 [2024-07-16 01:32:41.405959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.696 [2024-07-16 01:32:41.405975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.696 qpair failed and we were unable to recover it. 00:27:15.696 [2024-07-16 01:32:41.406231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.696 [2024-07-16 01:32:41.406248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.696 qpair failed and we were unable to recover it. 00:27:15.696 [2024-07-16 01:32:41.406508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.696 [2024-07-16 01:32:41.406525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.696 qpair failed and we were unable to recover it. 00:27:15.696 [2024-07-16 01:32:41.406688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.696 [2024-07-16 01:32:41.406704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.696 qpair failed and we were unable to recover it. 00:27:15.696 [2024-07-16 01:32:41.406812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.696 [2024-07-16 01:32:41.406829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.696 qpair failed and we were unable to recover it. 00:27:15.696 [2024-07-16 01:32:41.407054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.696 [2024-07-16 01:32:41.407071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.696 qpair failed and we were unable to recover it. 00:27:15.696 [2024-07-16 01:32:41.407220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.696 [2024-07-16 01:32:41.407242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.696 qpair failed and we were unable to recover it. 00:27:15.696 [2024-07-16 01:32:41.407398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.696 [2024-07-16 01:32:41.407414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.696 qpair failed and we were unable to recover it. 00:27:15.696 [2024-07-16 01:32:41.407601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.696 [2024-07-16 01:32:41.407618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.696 qpair failed and we were unable to recover it. 
00:27:15.696 [2024-07-16 01:32:41.407813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.696 [2024-07-16 01:32:41.407829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.696 qpair failed and we were unable to recover it. 00:27:15.696 [2024-07-16 01:32:41.407989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.696 [2024-07-16 01:32:41.408005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.696 qpair failed and we were unable to recover it. 00:27:15.696 [2024-07-16 01:32:41.408158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.696 [2024-07-16 01:32:41.408174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.696 qpair failed and we were unable to recover it. 00:27:15.696 [2024-07-16 01:32:41.408347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.696 [2024-07-16 01:32:41.408363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.696 qpair failed and we were unable to recover it. 00:27:15.696 [2024-07-16 01:32:41.408575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.696 [2024-07-16 01:32:41.408592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.696 qpair failed and we were unable to recover it. 00:27:15.696 [2024-07-16 01:32:41.408778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.696 [2024-07-16 01:32:41.408793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.696 qpair failed and we were unable to recover it. 00:27:15.696 [2024-07-16 01:32:41.408948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.696 [2024-07-16 01:32:41.408964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.696 qpair failed and we were unable to recover it. 00:27:15.696 [2024-07-16 01:32:41.409202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.696 [2024-07-16 01:32:41.409219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.696 qpair failed and we were unable to recover it. 00:27:15.696 [2024-07-16 01:32:41.409377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.696 [2024-07-16 01:32:41.409393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.696 qpair failed and we were unable to recover it. 00:27:15.696 [2024-07-16 01:32:41.409629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.696 [2024-07-16 01:32:41.409661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.696 qpair failed and we were unable to recover it. 
00:27:15.696 [2024-07-16 01:32:41.409923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.696 [2024-07-16 01:32:41.409954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.696 qpair failed and we were unable to recover it. 00:27:15.696 [2024-07-16 01:32:41.410294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.696 [2024-07-16 01:32:41.410375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.696 qpair failed and we were unable to recover it. 00:27:15.696 [2024-07-16 01:32:41.410664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.696 [2024-07-16 01:32:41.410677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.696 qpair failed and we were unable to recover it. 00:27:15.696 [2024-07-16 01:32:41.410887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.696 [2024-07-16 01:32:41.410898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.696 qpair failed and we were unable to recover it. 00:27:15.696 [2024-07-16 01:32:41.410993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.696 [2024-07-16 01:32:41.411003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.696 qpair failed and we were unable to recover it. 00:27:15.696 [2024-07-16 01:32:41.411233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.696 [2024-07-16 01:32:41.411265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.696 qpair failed and we were unable to recover it. 00:27:15.696 [2024-07-16 01:32:41.411470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.696 [2024-07-16 01:32:41.411500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.696 qpair failed and we were unable to recover it. 00:27:15.696 [2024-07-16 01:32:41.411705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.696 [2024-07-16 01:32:41.411736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.696 qpair failed and we were unable to recover it. 00:27:15.696 [2024-07-16 01:32:41.411925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.696 [2024-07-16 01:32:41.411955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.696 qpair failed and we were unable to recover it. 00:27:15.696 [2024-07-16 01:32:41.412233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.696 [2024-07-16 01:32:41.412264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.697 qpair failed and we were unable to recover it. 
00:27:15.697 [2024-07-16 01:32:41.412555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.697 [2024-07-16 01:32:41.412591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.697 qpair failed and we were unable to recover it. 00:27:15.697 [2024-07-16 01:32:41.412788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.697 [2024-07-16 01:32:41.412820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.697 qpair failed and we were unable to recover it. 00:27:15.697 [2024-07-16 01:32:41.413065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.697 [2024-07-16 01:32:41.413097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.697 qpair failed and we were unable to recover it. 00:27:15.697 [2024-07-16 01:32:41.413275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.697 [2024-07-16 01:32:41.413305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.697 qpair failed and we were unable to recover it. 00:27:15.697 [2024-07-16 01:32:41.413610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.697 [2024-07-16 01:32:41.413652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.697 qpair failed and we were unable to recover it. 00:27:15.697 [2024-07-16 01:32:41.413879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.697 [2024-07-16 01:32:41.413890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.697 qpair failed and we were unable to recover it. 00:27:15.697 [2024-07-16 01:32:41.414041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.697 [2024-07-16 01:32:41.414053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.697 qpair failed and we were unable to recover it. 00:27:15.697 [2024-07-16 01:32:41.414292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.697 [2024-07-16 01:32:41.414323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.697 qpair failed and we were unable to recover it. 00:27:15.697 [2024-07-16 01:32:41.414538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.697 [2024-07-16 01:32:41.414570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.697 qpair failed and we were unable to recover it. 00:27:15.697 [2024-07-16 01:32:41.414809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.697 [2024-07-16 01:32:41.414839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.697 qpair failed and we were unable to recover it. 
00:27:15.697 [2024-07-16 01:32:41.414970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.697 [2024-07-16 01:32:41.415001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.697 qpair failed and we were unable to recover it. 00:27:15.697 [2024-07-16 01:32:41.415271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.697 [2024-07-16 01:32:41.415302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.697 qpair failed and we were unable to recover it. 00:27:15.697 [2024-07-16 01:32:41.415580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.697 [2024-07-16 01:32:41.415612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.697 qpair failed and we were unable to recover it. 00:27:15.697 [2024-07-16 01:32:41.415730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.697 [2024-07-16 01:32:41.415762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.697 qpair failed and we were unable to recover it. 00:27:15.697 [2024-07-16 01:32:41.415977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.697 [2024-07-16 01:32:41.416008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.697 qpair failed and we were unable to recover it. 00:27:15.697 [2024-07-16 01:32:41.416210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.697 [2024-07-16 01:32:41.416241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.697 qpair failed and we were unable to recover it. 00:27:15.697 [2024-07-16 01:32:41.416496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.697 [2024-07-16 01:32:41.416531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.697 qpair failed and we were unable to recover it. 00:27:15.697 [2024-07-16 01:32:41.416825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.697 [2024-07-16 01:32:41.416856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.697 qpair failed and we were unable to recover it. 00:27:15.697 [2024-07-16 01:32:41.417126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.697 [2024-07-16 01:32:41.417158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.697 qpair failed and we were unable to recover it. 00:27:15.697 [2024-07-16 01:32:41.417348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.697 [2024-07-16 01:32:41.417360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.697 qpair failed and we were unable to recover it. 
00:27:15.697 [2024-07-16 01:32:41.417567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.697 [2024-07-16 01:32:41.417599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.697 qpair failed and we were unable to recover it. 00:27:15.697 [2024-07-16 01:32:41.417842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.697 [2024-07-16 01:32:41.417873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.697 qpair failed and we were unable to recover it. 00:27:15.697 [2024-07-16 01:32:41.418067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.697 [2024-07-16 01:32:41.418098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.697 qpair failed and we were unable to recover it. 00:27:15.697 [2024-07-16 01:32:41.418377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.697 [2024-07-16 01:32:41.418408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.697 qpair failed and we were unable to recover it. 00:27:15.697 [2024-07-16 01:32:41.418653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.697 [2024-07-16 01:32:41.418685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.697 qpair failed and we were unable to recover it. 00:27:15.697 [2024-07-16 01:32:41.418885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.697 [2024-07-16 01:32:41.418915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.697 qpair failed and we were unable to recover it. 00:27:15.697 [2024-07-16 01:32:41.419133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.697 [2024-07-16 01:32:41.419164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.697 qpair failed and we were unable to recover it. 00:27:15.697 [2024-07-16 01:32:41.419354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.697 [2024-07-16 01:32:41.419386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.697 qpair failed and we were unable to recover it. 00:27:15.697 [2024-07-16 01:32:41.419575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.697 [2024-07-16 01:32:41.419607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.697 qpair failed and we were unable to recover it. 00:27:15.697 [2024-07-16 01:32:41.419737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.697 [2024-07-16 01:32:41.419748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.697 qpair failed and we were unable to recover it. 
00:27:15.697 [2024-07-16 01:32:41.419892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.697 [2024-07-16 01:32:41.419903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.697 qpair failed and we were unable to recover it. 00:27:15.697 [2024-07-16 01:32:41.420168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.697 [2024-07-16 01:32:41.420237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:15.697 qpair failed and we were unable to recover it. 00:27:15.697 [2024-07-16 01:32:41.420595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.697 [2024-07-16 01:32:41.420634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:15.697 qpair failed and we were unable to recover it. 00:27:15.697 [2024-07-16 01:32:41.420918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.697 [2024-07-16 01:32:41.420937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.697 qpair failed and we were unable to recover it. 00:27:15.697 [2024-07-16 01:32:41.421202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.698 [2024-07-16 01:32:41.421238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.698 qpair failed and we were unable to recover it. 00:27:15.698 [2024-07-16 01:32:41.421522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.698 [2024-07-16 01:32:41.421553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.698 qpair failed and we were unable to recover it. 00:27:15.698 [2024-07-16 01:32:41.421824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.698 [2024-07-16 01:32:41.421835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.698 qpair failed and we were unable to recover it. 00:27:15.698 [2024-07-16 01:32:41.422007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.698 [2024-07-16 01:32:41.422019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.698 qpair failed and we were unable to recover it. 00:27:15.698 [2024-07-16 01:32:41.422257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.698 [2024-07-16 01:32:41.422288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.698 qpair failed and we were unable to recover it. 00:27:15.698 [2024-07-16 01:32:41.422490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.698 [2024-07-16 01:32:41.422522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.698 qpair failed and we were unable to recover it. 
00:27:15.698 [2024-07-16 01:32:41.422754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.698 [2024-07-16 01:32:41.422765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.698 qpair failed and we were unable to recover it. 00:27:15.698 [2024-07-16 01:32:41.423024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.698 [2024-07-16 01:32:41.423054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.698 qpair failed and we were unable to recover it. 00:27:15.698 [2024-07-16 01:32:41.423311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.698 [2024-07-16 01:32:41.423352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.698 qpair failed and we were unable to recover it. 00:27:15.698 [2024-07-16 01:32:41.423558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.698 [2024-07-16 01:32:41.423570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.698 qpair failed and we were unable to recover it. 00:27:15.698 [2024-07-16 01:32:41.423735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.698 [2024-07-16 01:32:41.423772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.698 qpair failed and we were unable to recover it. 00:27:15.698 [2024-07-16 01:32:41.424044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.698 [2024-07-16 01:32:41.424074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.698 qpair failed and we were unable to recover it. 00:27:15.698 [2024-07-16 01:32:41.424365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.698 [2024-07-16 01:32:41.424408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.698 qpair failed and we were unable to recover it. 00:27:15.698 [2024-07-16 01:32:41.424679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.698 [2024-07-16 01:32:41.424709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.698 qpair failed and we were unable to recover it. 00:27:15.698 [2024-07-16 01:32:41.424973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.698 [2024-07-16 01:32:41.425003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.698 qpair failed and we were unable to recover it. 00:27:15.698 [2024-07-16 01:32:41.425269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.698 [2024-07-16 01:32:41.425301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.698 qpair failed and we were unable to recover it. 
00:27:15.698 [2024-07-16 01:32:41.425599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.698 [2024-07-16 01:32:41.425611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.698 qpair failed and we were unable to recover it.
00:27:15.698 [2024-07-16 01:32:41.425766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.698 [2024-07-16 01:32:41.425797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.698 qpair failed and we were unable to recover it.
00:27:15.698 [2024-07-16 01:32:41.425987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.698 [2024-07-16 01:32:41.426018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.698 qpair failed and we were unable to recover it.
00:27:15.698 [2024-07-16 01:32:41.426208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.698 [2024-07-16 01:32:41.426239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.698 qpair failed and we were unable to recover it.
00:27:15.698 [2024-07-16 01:32:41.426525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.698 [2024-07-16 01:32:41.426563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.698 qpair failed and we were unable to recover it.
00:27:15.698 [2024-07-16 01:32:41.426857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.698 [2024-07-16 01:32:41.426887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.698 qpair failed and we were unable to recover it.
00:27:15.698 [2024-07-16 01:32:41.427159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.698 [2024-07-16 01:32:41.427190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.698 qpair failed and we were unable to recover it.
00:27:15.698 [2024-07-16 01:32:41.427431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.698 [2024-07-16 01:32:41.427462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.698 qpair failed and we were unable to recover it.
00:27:15.698 [2024-07-16 01:32:41.427730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.699 [2024-07-16 01:32:41.427761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.699 qpair failed and we were unable to recover it.
00:27:15.699 [2024-07-16 01:32:41.428017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.699 [2024-07-16 01:32:41.428048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.699 qpair failed and we were unable to recover it.
00:27:15.699 [2024-07-16 01:32:41.428291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.699 [2024-07-16 01:32:41.428322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.699 qpair failed and we were unable to recover it.
00:27:15.699 [2024-07-16 01:32:41.428600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.699 [2024-07-16 01:32:41.428612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.699 qpair failed and we were unable to recover it.
00:27:15.699 [2024-07-16 01:32:41.428818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.699 [2024-07-16 01:32:41.428830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.699 qpair failed and we were unable to recover it.
00:27:15.699 [2024-07-16 01:32:41.428974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.699 [2024-07-16 01:32:41.428985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.699 qpair failed and we were unable to recover it.
00:27:15.699 [2024-07-16 01:32:41.429228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.699 [2024-07-16 01:32:41.429260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.699 qpair failed and we were unable to recover it.
00:27:15.699 [2024-07-16 01:32:41.429529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.699 [2024-07-16 01:32:41.429562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.699 qpair failed and we were unable to recover it.
00:27:15.699 [2024-07-16 01:32:41.429844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.699 [2024-07-16 01:32:41.429855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.699 qpair failed and we were unable to recover it.
00:27:15.699 [2024-07-16 01:32:41.430131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.699 [2024-07-16 01:32:41.430161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.699 qpair failed and we were unable to recover it.
00:27:15.699 [2024-07-16 01:32:41.430365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.699 [2024-07-16 01:32:41.430397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.699 qpair failed and we were unable to recover it.
00:27:15.699 [2024-07-16 01:32:41.430502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.699 [2024-07-16 01:32:41.430512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.699 qpair failed and we were unable to recover it.
00:27:15.699 [2024-07-16 01:32:41.430646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.699 [2024-07-16 01:32:41.430656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.699 qpair failed and we were unable to recover it.
00:27:15.699 [2024-07-16 01:32:41.430859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.699 [2024-07-16 01:32:41.430903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420
00:27:15.699 qpair failed and we were unable to recover it.
00:27:15.699 [2024-07-16 01:32:41.431047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.699 [2024-07-16 01:32:41.431080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420
00:27:15.699 qpair failed and we were unable to recover it.
00:27:15.699 [2024-07-16 01:32:41.431275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.699 [2024-07-16 01:32:41.431307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420
00:27:15.699 qpair failed and we were unable to recover it.
00:27:15.699 [2024-07-16 01:32:41.431611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.699 [2024-07-16 01:32:41.431644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.699 qpair failed and we were unable to recover it.
00:27:15.699 [2024-07-16 01:32:41.431838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.699 [2024-07-16 01:32:41.431869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.699 qpair failed and we were unable to recover it.
00:27:15.699 [2024-07-16 01:32:41.432046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.699 [2024-07-16 01:32:41.432077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.699 qpair failed and we were unable to recover it.
00:27:15.699 [2024-07-16 01:32:41.432357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.699 [2024-07-16 01:32:41.432398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.699 qpair failed and we were unable to recover it.
00:27:15.699 [2024-07-16 01:32:41.432653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.699 [2024-07-16 01:32:41.432684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.699 qpair failed and we were unable to recover it.
00:27:15.699 [2024-07-16 01:32:41.432906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.699 [2024-07-16 01:32:41.432937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.699 qpair failed and we were unable to recover it.
00:27:15.699 [2024-07-16 01:32:41.433186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.699 [2024-07-16 01:32:41.433216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.699 qpair failed and we were unable to recover it.
00:27:15.699 [2024-07-16 01:32:41.433483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.699 [2024-07-16 01:32:41.433515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.699 qpair failed and we were unable to recover it.
00:27:15.699 [2024-07-16 01:32:41.433785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.699 [2024-07-16 01:32:41.433815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.699 qpair failed and we were unable to recover it.
00:27:15.699 [2024-07-16 01:32:41.434085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.699 [2024-07-16 01:32:41.434115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.699 qpair failed and we were unable to recover it.
00:27:15.699 [2024-07-16 01:32:41.434354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.699 [2024-07-16 01:32:41.434404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.699 qpair failed and we were unable to recover it.
00:27:15.699 [2024-07-16 01:32:41.434638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.699 [2024-07-16 01:32:41.434650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.699 qpair failed and we were unable to recover it.
00:27:15.699 [2024-07-16 01:32:41.434812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.699 [2024-07-16 01:32:41.434823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.699 qpair failed and we were unable to recover it.
00:27:15.699 [2024-07-16 01:32:41.434909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.699 [2024-07-16 01:32:41.434919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.699 qpair failed and we were unable to recover it.
00:27:15.699 [2024-07-16 01:32:41.435076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.699 [2024-07-16 01:32:41.435087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.699 qpair failed and we were unable to recover it.
00:27:15.699 [2024-07-16 01:32:41.435308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.699 [2024-07-16 01:32:41.435319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.699 qpair failed and we were unable to recover it.
00:27:15.699 [2024-07-16 01:32:41.435493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.699 [2024-07-16 01:32:41.435504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.699 qpair failed and we were unable to recover it.
00:27:15.699 [2024-07-16 01:32:41.435724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.699 [2024-07-16 01:32:41.435735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.699 qpair failed and we were unable to recover it.
00:27:15.699 [2024-07-16 01:32:41.435812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.699 [2024-07-16 01:32:41.435822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.699 qpair failed and we were unable to recover it.
00:27:15.699 [2024-07-16 01:32:41.436073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.700 [2024-07-16 01:32:41.436103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.700 qpair failed and we were unable to recover it.
00:27:15.700 [2024-07-16 01:32:41.436367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.700 [2024-07-16 01:32:41.436400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.700 qpair failed and we were unable to recover it.
00:27:15.700 [2024-07-16 01:32:41.436513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.700 [2024-07-16 01:32:41.436523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.700 qpair failed and we were unable to recover it.
00:27:15.700 [2024-07-16 01:32:41.436612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.700 [2024-07-16 01:32:41.436622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.700 qpair failed and we were unable to recover it.
00:27:15.700 [2024-07-16 01:32:41.436711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.700 [2024-07-16 01:32:41.436721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.700 qpair failed and we were unable to recover it.
00:27:15.700 [2024-07-16 01:32:41.437000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.700 [2024-07-16 01:32:41.437032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.700 qpair failed and we were unable to recover it.
00:27:15.700 [2024-07-16 01:32:41.437347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.700 [2024-07-16 01:32:41.437379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.700 qpair failed and we were unable to recover it.
00:27:15.700 [2024-07-16 01:32:41.437529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.700 [2024-07-16 01:32:41.437560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.700 qpair failed and we were unable to recover it.
00:27:15.700 [2024-07-16 01:32:41.437766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.700 [2024-07-16 01:32:41.437796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.700 qpair failed and we were unable to recover it.
00:27:15.700 [2024-07-16 01:32:41.437934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.700 [2024-07-16 01:32:41.437965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.700 qpair failed and we were unable to recover it.
00:27:15.700 [2024-07-16 01:32:41.438230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.700 [2024-07-16 01:32:41.438260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.700 qpair failed and we were unable to recover it.
00:27:15.700 [2024-07-16 01:32:41.438456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.700 [2024-07-16 01:32:41.438469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.700 qpair failed and we were unable to recover it.
00:27:15.700 [2024-07-16 01:32:41.438624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.700 [2024-07-16 01:32:41.438655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.700 qpair failed and we were unable to recover it.
00:27:15.700 [2024-07-16 01:32:41.438860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.700 [2024-07-16 01:32:41.438891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.700 qpair failed and we were unable to recover it.
00:27:15.700 [2024-07-16 01:32:41.439157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.700 [2024-07-16 01:32:41.439188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.700 qpair failed and we were unable to recover it.
00:27:15.700 [2024-07-16 01:32:41.439480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.700 [2024-07-16 01:32:41.439511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.700 qpair failed and we were unable to recover it.
00:27:15.700 [2024-07-16 01:32:41.439783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.700 [2024-07-16 01:32:41.439814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.700 qpair failed and we were unable to recover it.
00:27:15.700 [2024-07-16 01:32:41.440030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.700 [2024-07-16 01:32:41.440060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.700 qpair failed and we were unable to recover it.
00:27:15.700 [2024-07-16 01:32:41.440310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.700 [2024-07-16 01:32:41.440349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.700 qpair failed and we were unable to recover it.
00:27:15.700 [2024-07-16 01:32:41.440629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.700 [2024-07-16 01:32:41.440659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.700 qpair failed and we were unable to recover it.
00:27:15.700 [2024-07-16 01:32:41.440801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.700 [2024-07-16 01:32:41.440831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.700 qpair failed and we were unable to recover it.
00:27:15.700 [2024-07-16 01:32:41.441100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.700 [2024-07-16 01:32:41.441131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.700 qpair failed and we were unable to recover it.
00:27:15.700 [2024-07-16 01:32:41.441318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.700 [2024-07-16 01:32:41.441356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.700 qpair failed and we were unable to recover it.
00:27:15.700 [2024-07-16 01:32:41.441606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.700 [2024-07-16 01:32:41.441619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.700 qpair failed and we were unable to recover it.
00:27:15.700 [2024-07-16 01:32:41.441845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.700 [2024-07-16 01:32:41.441876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.700 qpair failed and we were unable to recover it.
00:27:15.700 [2024-07-16 01:32:41.441995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.700 [2024-07-16 01:32:41.442026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.700 qpair failed and we were unable to recover it.
00:27:15.700 [2024-07-16 01:32:41.442244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.700 [2024-07-16 01:32:41.442275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.700 qpair failed and we were unable to recover it.
00:27:15.700 [2024-07-16 01:32:41.442547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.700 [2024-07-16 01:32:41.442561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.700 qpair failed and we were unable to recover it.
00:27:15.700 [2024-07-16 01:32:41.442785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.700 [2024-07-16 01:32:41.442797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.700 qpair failed and we were unable to recover it.
00:27:15.700 [2024-07-16 01:32:41.442960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.700 [2024-07-16 01:32:41.442972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.700 qpair failed and we were unable to recover it.
00:27:15.700 [2024-07-16 01:32:41.443067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.700 [2024-07-16 01:32:41.443102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.700 qpair failed and we were unable to recover it.
00:27:15.700 [2024-07-16 01:32:41.443365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.700 [2024-07-16 01:32:41.443403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.700 qpair failed and we were unable to recover it.
00:27:15.700 [2024-07-16 01:32:41.443603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.700 [2024-07-16 01:32:41.443615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.700 qpair failed and we were unable to recover it.
00:27:15.700 [2024-07-16 01:32:41.443794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.700 [2024-07-16 01:32:41.443824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.700 qpair failed and we were unable to recover it.
00:27:15.700 [2024-07-16 01:32:41.444001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.700 [2024-07-16 01:32:41.444030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.700 qpair failed and we were unable to recover it.
00:27:15.700 [2024-07-16 01:32:41.444211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.700 [2024-07-16 01:32:41.444241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.700 qpair failed and we were unable to recover it.
00:27:15.701 [2024-07-16 01:32:41.444544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.701 [2024-07-16 01:32:41.444556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.701 qpair failed and we were unable to recover it.
00:27:15.701 [2024-07-16 01:32:41.444786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.701 [2024-07-16 01:32:41.444817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.701 qpair failed and we were unable to recover it.
00:27:15.701 [2024-07-16 01:32:41.445070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.701 [2024-07-16 01:32:41.445100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.701 qpair failed and we were unable to recover it.
00:27:15.701 [2024-07-16 01:32:41.445280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.701 [2024-07-16 01:32:41.445311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.701 qpair failed and we were unable to recover it.
00:27:15.701 [2024-07-16 01:32:41.445521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.701 [2024-07-16 01:32:41.445552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.701 qpair failed and we were unable to recover it.
00:27:15.701 [2024-07-16 01:32:41.445813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.701 [2024-07-16 01:32:41.445843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.701 qpair failed and we were unable to recover it.
00:27:15.701 [2024-07-16 01:32:41.446029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.701 [2024-07-16 01:32:41.446059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.701 qpair failed and we were unable to recover it.
00:27:15.701 [2024-07-16 01:32:41.446243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.701 [2024-07-16 01:32:41.446275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.701 qpair failed and we were unable to recover it.
00:27:15.701 [2024-07-16 01:32:41.446490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.701 [2024-07-16 01:32:41.446525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.701 qpair failed and we were unable to recover it.
00:27:15.701 [2024-07-16 01:32:41.446712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.701 [2024-07-16 01:32:41.446743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.701 qpair failed and we were unable to recover it.
00:27:15.701 [2024-07-16 01:32:41.446867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.701 [2024-07-16 01:32:41.446888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.701 qpair failed and we were unable to recover it.
00:27:15.701 [2024-07-16 01:32:41.447086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.701 [2024-07-16 01:32:41.447116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.701 qpair failed and we were unable to recover it.
00:27:15.701 [2024-07-16 01:32:41.447376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.701 [2024-07-16 01:32:41.447407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.701 qpair failed and we were unable to recover it.
00:27:15.701 [2024-07-16 01:32:41.447535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.701 [2024-07-16 01:32:41.447566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.701 qpair failed and we were unable to recover it.
00:27:15.701 [2024-07-16 01:32:41.447784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.701 [2024-07-16 01:32:41.447814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.701 qpair failed and we were unable to recover it.
00:27:15.701 [2024-07-16 01:32:41.448022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.701 [2024-07-16 01:32:41.448052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.701 qpair failed and we were unable to recover it.
00:27:15.701 [2024-07-16 01:32:41.448303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.701 [2024-07-16 01:32:41.448333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.701 qpair failed and we were unable to recover it.
00:27:15.701 [2024-07-16 01:32:41.448473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.701 [2024-07-16 01:32:41.448504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.701 qpair failed and we were unable to recover it.
00:27:15.701 [2024-07-16 01:32:41.448698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.701 [2024-07-16 01:32:41.448729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.701 qpair failed and we were unable to recover it.
00:27:15.701 [2024-07-16 01:32:41.448972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.701 [2024-07-16 01:32:41.449003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.701 qpair failed and we were unable to recover it.
00:27:15.701 [2024-07-16 01:32:41.449186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.701 [2024-07-16 01:32:41.449217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.701 qpair failed and we were unable to recover it.
00:27:15.701 [2024-07-16 01:32:41.449413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.701 [2024-07-16 01:32:41.449445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.701 qpair failed and we were unable to recover it.
00:27:15.701 [2024-07-16 01:32:41.449616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.701 [2024-07-16 01:32:41.449627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.701 qpair failed and we were unable to recover it.
00:27:15.701 [2024-07-16 01:32:41.449701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.701 [2024-07-16 01:32:41.449712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.701 qpair failed and we were unable to recover it.
00:27:15.701 [2024-07-16 01:32:41.449821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.701 [2024-07-16 01:32:41.449850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.701 qpair failed and we were unable to recover it.
00:27:15.701 [2024-07-16 01:32:41.449983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.701 [2024-07-16 01:32:41.450015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.701 qpair failed and we were unable to recover it.
00:27:15.701 [2024-07-16 01:32:41.450199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.701 [2024-07-16 01:32:41.450229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.701 qpair failed and we were unable to recover it.
00:27:15.701 [2024-07-16 01:32:41.450479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.701 [2024-07-16 01:32:41.450491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.701 qpair failed and we were unable to recover it.
00:27:15.701 [2024-07-16 01:32:41.450693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.701 [2024-07-16 01:32:41.450705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.701 qpair failed and we were unable to recover it.
00:27:15.701 [2024-07-16 01:32:41.450852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.701 [2024-07-16 01:32:41.450863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.701 qpair failed and we were unable to recover it.
00:27:15.701 [2024-07-16 01:32:41.451023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.701 [2024-07-16 01:32:41.451034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.701 qpair failed and we were unable to recover it.
00:27:15.701 [2024-07-16 01:32:41.451177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.701 [2024-07-16 01:32:41.451215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.701 qpair failed and we were unable to recover it.
00:27:15.701 [2024-07-16 01:32:41.451378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.701 [2024-07-16 01:32:41.451411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.701 qpair failed and we were unable to recover it.
00:27:15.701 [2024-07-16 01:32:41.451658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.701 [2024-07-16 01:32:41.451688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.701 qpair failed and we were unable to recover it.
00:27:15.701 [2024-07-16 01:32:41.451944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.701 [2024-07-16 01:32:41.451974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.701 qpair failed and we were unable to recover it.
00:27:15.702 [2024-07-16 01:32:41.452164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.702 [2024-07-16 01:32:41.452194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.702 qpair failed and we were unable to recover it.
00:27:15.702 [2024-07-16 01:32:41.452469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.702 [2024-07-16 01:32:41.452500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.702 qpair failed and we were unable to recover it.
00:27:15.702 [2024-07-16 01:32:41.452747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.702 [2024-07-16 01:32:41.452777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.702 qpair failed and we were unable to recover it.
00:27:15.702 [2024-07-16 01:32:41.453046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.702 [2024-07-16 01:32:41.453057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.702 qpair failed and we were unable to recover it.
00:27:15.702 [2024-07-16 01:32:41.453223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.702 [2024-07-16 01:32:41.453234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.702 qpair failed and we were unable to recover it.
00:27:15.702 [2024-07-16 01:32:41.453395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.702 [2024-07-16 01:32:41.453427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.702 qpair failed and we were unable to recover it.
00:27:15.702 [2024-07-16 01:32:41.453685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.702 [2024-07-16 01:32:41.453717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.702 qpair failed and we were unable to recover it.
00:27:15.702 [2024-07-16 01:32:41.453903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.702 [2024-07-16 01:32:41.453933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.702 qpair failed and we were unable to recover it.
00:27:15.702 [2024-07-16 01:32:41.454057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.702 [2024-07-16 01:32:41.454088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.702 qpair failed and we were unable to recover it.
00:27:15.702 [2024-07-16 01:32:41.454333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.702 [2024-07-16 01:32:41.454388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.702 qpair failed and we were unable to recover it.
00:27:15.702 [2024-07-16 01:32:41.454563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.702 [2024-07-16 01:32:41.454575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.702 qpair failed and we were unable to recover it.
00:27:15.702 [2024-07-16 01:32:41.454779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.702 [2024-07-16 01:32:41.454810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.702 qpair failed and we were unable to recover it.
00:27:15.702 [2024-07-16 01:32:41.454990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.702 [2024-07-16 01:32:41.455021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.702 qpair failed and we were unable to recover it.
00:27:15.702 [2024-07-16 01:32:41.455290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.702 [2024-07-16 01:32:41.455320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.702 qpair failed and we were unable to recover it.
00:27:15.702 [2024-07-16 01:32:41.455624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.702 [2024-07-16 01:32:41.455656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.702 qpair failed and we were unable to recover it.
00:27:15.702 [2024-07-16 01:32:41.455850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.702 [2024-07-16 01:32:41.455881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.702 qpair failed and we were unable to recover it.
00:27:15.702 [2024-07-16 01:32:41.456060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.702 [2024-07-16 01:32:41.456091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.702 qpair failed and we were unable to recover it.
00:27:15.702 [2024-07-16 01:32:41.456285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.702 [2024-07-16 01:32:41.456316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.702 qpair failed and we were unable to recover it.
00:27:15.702 [2024-07-16 01:32:41.456501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.702 [2024-07-16 01:32:41.456512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.702 qpair failed and we were unable to recover it.
00:27:15.702 [2024-07-16 01:32:41.456697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.702 [2024-07-16 01:32:41.456727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.702 qpair failed and we were unable to recover it.
00:27:15.702 [2024-07-16 01:32:41.456927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.702 [2024-07-16 01:32:41.456958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.702 qpair failed and we were unable to recover it.
00:27:15.702 [2024-07-16 01:32:41.457207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.702 [2024-07-16 01:32:41.457237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.702 qpair failed and we were unable to recover it.
00:27:15.702 [2024-07-16 01:32:41.457425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.702 [2024-07-16 01:32:41.457457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.702 qpair failed and we were unable to recover it.
00:27:15.702 [2024-07-16 01:32:41.457697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.702 [2024-07-16 01:32:41.457708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.702 qpair failed and we were unable to recover it.
00:27:15.702 [2024-07-16 01:32:41.457967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.702 [2024-07-16 01:32:41.457997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.702 qpair failed and we were unable to recover it.
00:27:15.702 [2024-07-16 01:32:41.458197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.702 [2024-07-16 01:32:41.458227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.702 qpair failed and we were unable to recover it.
00:27:15.702 [2024-07-16 01:32:41.458462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.702 [2024-07-16 01:32:41.458497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.702 qpair failed and we were unable to recover it.
00:27:15.702 [2024-07-16 01:32:41.458663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.702 [2024-07-16 01:32:41.458677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.702 qpair failed and we were unable to recover it.
00:27:15.702 [2024-07-16 01:32:41.458910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.702 [2024-07-16 01:32:41.458941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.702 qpair failed and we were unable to recover it.
00:27:15.702 [2024-07-16 01:32:41.459234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.702 [2024-07-16 01:32:41.459264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.702 qpair failed and we were unable to recover it.
00:27:15.702 [2024-07-16 01:32:41.459457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.702 [2024-07-16 01:32:41.459489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.702 qpair failed and we were unable to recover it.
00:27:15.702 [2024-07-16 01:32:41.459703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.702 [2024-07-16 01:32:41.459733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.702 qpair failed and we were unable to recover it.
00:27:15.702 [2024-07-16 01:32:41.459931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.702 [2024-07-16 01:32:41.459961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.702 qpair failed and we were unable to recover it.
00:27:15.702 [2024-07-16 01:32:41.460266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.702 [2024-07-16 01:32:41.460296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.702 qpair failed and we were unable to recover it.
00:27:15.702 [2024-07-16 01:32:41.460548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.702 [2024-07-16 01:32:41.460579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.702 qpair failed and we were unable to recover it.
00:27:15.702 [2024-07-16 01:32:41.460806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.702 [2024-07-16 01:32:41.460818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.703 qpair failed and we were unable to recover it.
00:27:15.703 [2024-07-16 01:32:41.460968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.703 [2024-07-16 01:32:41.460980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.703 qpair failed and we were unable to recover it.
00:27:15.703 [2024-07-16 01:32:41.461080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.703 [2024-07-16 01:32:41.461109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.703 qpair failed and we were unable to recover it.
00:27:15.703 [2024-07-16 01:32:41.461236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.703 [2024-07-16 01:32:41.461267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.703 qpair failed and we were unable to recover it.
00:27:15.703 [2024-07-16 01:32:41.461388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.703 [2024-07-16 01:32:41.461420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.703 qpair failed and we were unable to recover it.
00:27:15.703 [2024-07-16 01:32:41.461663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.703 [2024-07-16 01:32:41.461694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.703 qpair failed and we were unable to recover it.
00:27:15.703 [2024-07-16 01:32:41.461970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.703 [2024-07-16 01:32:41.462001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.703 qpair failed and we were unable to recover it.
00:27:15.703 [2024-07-16 01:32:41.462210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.703 [2024-07-16 01:32:41.462241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.703 qpair failed and we were unable to recover it.
00:27:15.703 [2024-07-16 01:32:41.462513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.703 [2024-07-16 01:32:41.462547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.703 qpair failed and we were unable to recover it.
00:27:15.703 [2024-07-16 01:32:41.462819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.703 [2024-07-16 01:32:41.462855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.703 qpair failed and we were unable to recover it.
00:27:15.703 [2024-07-16 01:32:41.463017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.703 [2024-07-16 01:32:41.463028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.703 qpair failed and we were unable to recover it.
00:27:15.703 [2024-07-16 01:32:41.463170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.703 [2024-07-16 01:32:41.463181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.703 qpair failed and we were unable to recover it.
00:27:15.703 [2024-07-16 01:32:41.463392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.703 [2024-07-16 01:32:41.463404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.703 qpair failed and we were unable to recover it.
00:27:15.703 [2024-07-16 01:32:41.463479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.703 [2024-07-16 01:32:41.463489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.703 qpair failed and we were unable to recover it.
00:27:15.703 [2024-07-16 01:32:41.463662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.703 [2024-07-16 01:32:41.463673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.703 qpair failed and we were unable to recover it.
00:27:15.703 [2024-07-16 01:32:41.463888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.703 [2024-07-16 01:32:41.463918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.703 qpair failed and we were unable to recover it.
00:27:15.703 [2024-07-16 01:32:41.464026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.703 [2024-07-16 01:32:41.464056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.703 qpair failed and we were unable to recover it.
00:27:15.703 [2024-07-16 01:32:41.464195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.703 [2024-07-16 01:32:41.464225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.703 qpair failed and we were unable to recover it.
00:27:15.703 [2024-07-16 01:32:41.464476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.703 [2024-07-16 01:32:41.464508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.703 qpair failed and we were unable to recover it.
00:27:15.703 [2024-07-16 01:32:41.464701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.703 [2024-07-16 01:32:41.464732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.703 qpair failed and we were unable to recover it.
00:27:15.703 [2024-07-16 01:32:41.464860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.703 [2024-07-16 01:32:41.464891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.703 qpair failed and we were unable to recover it.
00:27:15.703 [2024-07-16 01:32:41.465092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.703 [2024-07-16 01:32:41.465122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.703 qpair failed and we were unable to recover it. 00:27:15.703 [2024-07-16 01:32:41.465366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.703 [2024-07-16 01:32:41.465399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.703 qpair failed and we were unable to recover it. 00:27:15.703 [2024-07-16 01:32:41.465641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.703 [2024-07-16 01:32:41.465671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.703 qpair failed and we were unable to recover it. 00:27:15.703 [2024-07-16 01:32:41.465851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.703 [2024-07-16 01:32:41.465880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.703 qpair failed and we were unable to recover it. 00:27:15.703 [2024-07-16 01:32:41.466059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.703 [2024-07-16 01:32:41.466089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.703 qpair failed and we were unable to recover it. 00:27:15.703 [2024-07-16 01:32:41.466359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.703 [2024-07-16 01:32:41.466401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.703 qpair failed and we were unable to recover it. 00:27:15.703 [2024-07-16 01:32:41.466594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.703 [2024-07-16 01:32:41.466625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.703 qpair failed and we were unable to recover it. 00:27:15.703 [2024-07-16 01:32:41.466883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.703 [2024-07-16 01:32:41.466895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.703 qpair failed and we were unable to recover it. 00:27:15.703 [2024-07-16 01:32:41.467109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.703 [2024-07-16 01:32:41.467121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.703 qpair failed and we were unable to recover it. 00:27:15.703 [2024-07-16 01:32:41.467370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.704 [2024-07-16 01:32:41.467402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.704 qpair failed and we were unable to recover it. 
00:27:15.704 [2024-07-16 01:32:41.467667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.704 [2024-07-16 01:32:41.467679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.704 qpair failed and we were unable to recover it. 00:27:15.704 [2024-07-16 01:32:41.467914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.704 [2024-07-16 01:32:41.467956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.704 qpair failed and we were unable to recover it. 00:27:15.704 [2024-07-16 01:32:41.468136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.704 [2024-07-16 01:32:41.468167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.704 qpair failed and we were unable to recover it. 00:27:15.704 [2024-07-16 01:32:41.468436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.704 [2024-07-16 01:32:41.468475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.704 qpair failed and we were unable to recover it. 00:27:15.704 [2024-07-16 01:32:41.468690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.704 [2024-07-16 01:32:41.468701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.704 qpair failed and we were unable to recover it. 00:27:15.704 [2024-07-16 01:32:41.468938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.704 [2024-07-16 01:32:41.468969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.704 qpair failed and we were unable to recover it. 00:27:15.704 [2024-07-16 01:32:41.469148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.704 [2024-07-16 01:32:41.469178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.704 qpair failed and we were unable to recover it. 00:27:15.704 [2024-07-16 01:32:41.469372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.704 [2024-07-16 01:32:41.469403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.704 qpair failed and we were unable to recover it. 00:27:15.704 [2024-07-16 01:32:41.469530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.704 [2024-07-16 01:32:41.469542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.704 qpair failed and we were unable to recover it. 00:27:15.704 [2024-07-16 01:32:41.469773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.704 [2024-07-16 01:32:41.469804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.704 qpair failed and we were unable to recover it. 
00:27:15.704 [2024-07-16 01:32:41.470069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.704 [2024-07-16 01:32:41.470099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.704 qpair failed and we were unable to recover it. 00:27:15.704 [2024-07-16 01:32:41.470233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.704 [2024-07-16 01:32:41.470264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.704 qpair failed and we were unable to recover it. 00:27:15.704 [2024-07-16 01:32:41.470399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.704 [2024-07-16 01:32:41.470434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.704 qpair failed and we were unable to recover it. 00:27:15.704 [2024-07-16 01:32:41.470642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.704 [2024-07-16 01:32:41.470673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.704 qpair failed and we were unable to recover it. 00:27:15.704 [2024-07-16 01:32:41.470880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.704 [2024-07-16 01:32:41.470911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.704 qpair failed and we were unable to recover it. 00:27:15.704 [2024-07-16 01:32:41.471114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.704 [2024-07-16 01:32:41.471145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.704 qpair failed and we were unable to recover it. 00:27:15.704 [2024-07-16 01:32:41.471285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.704 [2024-07-16 01:32:41.471316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.704 qpair failed and we were unable to recover it. 00:27:15.704 [2024-07-16 01:32:41.471585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.704 [2024-07-16 01:32:41.471596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.704 qpair failed and we were unable to recover it. 00:27:15.704 [2024-07-16 01:32:41.471739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.704 [2024-07-16 01:32:41.471750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.704 qpair failed and we were unable to recover it. 00:27:15.704 [2024-07-16 01:32:41.471966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.704 [2024-07-16 01:32:41.471997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.704 qpair failed and we were unable to recover it. 
00:27:15.704 [2024-07-16 01:32:41.472216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.704 [2024-07-16 01:32:41.472247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.704 qpair failed and we were unable to recover it. 00:27:15.704 [2024-07-16 01:32:41.472510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.704 [2024-07-16 01:32:41.472541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.704 qpair failed and we were unable to recover it. 00:27:15.704 [2024-07-16 01:32:41.472724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.704 [2024-07-16 01:32:41.472755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.704 qpair failed and we were unable to recover it. 00:27:15.704 [2024-07-16 01:32:41.473016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.704 [2024-07-16 01:32:41.473047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.704 qpair failed and we were unable to recover it. 00:27:15.704 [2024-07-16 01:32:41.473237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.704 [2024-07-16 01:32:41.473268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.704 qpair failed and we were unable to recover it. 00:27:15.704 [2024-07-16 01:32:41.473475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.704 [2024-07-16 01:32:41.473506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.704 qpair failed and we were unable to recover it. 00:27:15.704 [2024-07-16 01:32:41.473748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.704 [2024-07-16 01:32:41.473779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.704 qpair failed and we were unable to recover it. 00:27:15.704 [2024-07-16 01:32:41.474023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.704 [2024-07-16 01:32:41.474054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.704 qpair failed and we were unable to recover it. 00:27:15.704 [2024-07-16 01:32:41.474210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.704 [2024-07-16 01:32:41.474241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.704 qpair failed and we were unable to recover it. 00:27:15.704 [2024-07-16 01:32:41.474455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.704 [2024-07-16 01:32:41.474490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.704 qpair failed and we were unable to recover it. 
00:27:15.704 [2024-07-16 01:32:41.474618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.704 [2024-07-16 01:32:41.474650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.704 qpair failed and we were unable to recover it. 00:27:15.704 [2024-07-16 01:32:41.474917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.704 [2024-07-16 01:32:41.474948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.704 qpair failed and we were unable to recover it. 00:27:15.704 [2024-07-16 01:32:41.475241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.704 [2024-07-16 01:32:41.475271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.704 qpair failed and we were unable to recover it. 00:27:15.704 [2024-07-16 01:32:41.475547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.704 [2024-07-16 01:32:41.475580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.704 qpair failed and we were unable to recover it. 00:27:15.704 [2024-07-16 01:32:41.475766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.705 [2024-07-16 01:32:41.475798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.705 qpair failed and we were unable to recover it. 00:27:15.705 [2024-07-16 01:32:41.475995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.705 [2024-07-16 01:32:41.476026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.705 qpair failed and we were unable to recover it. 00:27:15.705 [2024-07-16 01:32:41.476231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.705 [2024-07-16 01:32:41.476262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.705 qpair failed and we were unable to recover it. 00:27:15.705 [2024-07-16 01:32:41.476455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.705 [2024-07-16 01:32:41.476486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.705 qpair failed and we were unable to recover it. 00:27:15.705 [2024-07-16 01:32:41.476674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.705 [2024-07-16 01:32:41.476705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.705 qpair failed and we were unable to recover it. 00:27:15.705 [2024-07-16 01:32:41.476970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.705 [2024-07-16 01:32:41.477001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.705 qpair failed and we were unable to recover it. 
00:27:15.705 [2024-07-16 01:32:41.477181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.705 [2024-07-16 01:32:41.477212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.705 qpair failed and we were unable to recover it. 00:27:15.705 [2024-07-16 01:32:41.477358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.705 [2024-07-16 01:32:41.477396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.705 qpair failed and we were unable to recover it. 00:27:15.705 [2024-07-16 01:32:41.477614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.705 [2024-07-16 01:32:41.477645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.705 qpair failed and we were unable to recover it. 00:27:15.705 [2024-07-16 01:32:41.477760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.705 [2024-07-16 01:32:41.477790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.705 qpair failed and we were unable to recover it. 00:27:15.705 [2024-07-16 01:32:41.478047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.705 [2024-07-16 01:32:41.478058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.705 qpair failed and we were unable to recover it. 00:27:15.705 [2024-07-16 01:32:41.478233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.705 [2024-07-16 01:32:41.478245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.705 qpair failed and we were unable to recover it. 00:27:15.705 [2024-07-16 01:32:41.478456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.705 [2024-07-16 01:32:41.478468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.705 qpair failed and we were unable to recover it. 00:27:15.705 [2024-07-16 01:32:41.478685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.705 [2024-07-16 01:32:41.478716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.705 qpair failed and we were unable to recover it. 00:27:15.705 [2024-07-16 01:32:41.478914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.705 [2024-07-16 01:32:41.478945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.705 qpair failed and we were unable to recover it. 00:27:15.705 [2024-07-16 01:32:41.479136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.705 [2024-07-16 01:32:41.479167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.705 qpair failed and we were unable to recover it. 
00:27:15.705 [2024-07-16 01:32:41.479301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.705 [2024-07-16 01:32:41.479332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.705 qpair failed and we were unable to recover it. 00:27:15.705 [2024-07-16 01:32:41.479459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.705 [2024-07-16 01:32:41.479490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.705 qpair failed and we were unable to recover it. 00:27:15.705 [2024-07-16 01:32:41.479787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.705 [2024-07-16 01:32:41.479818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.705 qpair failed and we were unable to recover it. 00:27:15.705 [2024-07-16 01:32:41.480076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.705 [2024-07-16 01:32:41.480107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.705 qpair failed and we were unable to recover it. 00:27:15.705 [2024-07-16 01:32:41.480370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.705 [2024-07-16 01:32:41.480403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.705 qpair failed and we were unable to recover it. 00:27:15.705 [2024-07-16 01:32:41.480695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.705 [2024-07-16 01:32:41.480706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.705 qpair failed and we were unable to recover it. 00:27:15.705 [2024-07-16 01:32:41.480796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.705 [2024-07-16 01:32:41.480806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.705 qpair failed and we were unable to recover it. 00:27:15.705 [2024-07-16 01:32:41.481010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.705 [2024-07-16 01:32:41.481041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.705 qpair failed and we were unable to recover it. 00:27:15.705 [2024-07-16 01:32:41.481284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.705 [2024-07-16 01:32:41.481314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.705 qpair failed and we were unable to recover it. 00:27:15.705 [2024-07-16 01:32:41.481531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.705 [2024-07-16 01:32:41.481562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.705 qpair failed and we were unable to recover it. 
00:27:15.705 [2024-07-16 01:32:41.481826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.705 [2024-07-16 01:32:41.481857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.705 qpair failed and we were unable to recover it. 00:27:15.705 [2024-07-16 01:32:41.482042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.705 [2024-07-16 01:32:41.482072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.705 qpair failed and we were unable to recover it. 00:27:15.705 [2024-07-16 01:32:41.482316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.705 [2024-07-16 01:32:41.482360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.705 qpair failed and we were unable to recover it. 00:27:15.705 [2024-07-16 01:32:41.482647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.705 [2024-07-16 01:32:41.482679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.705 qpair failed and we were unable to recover it. 00:27:15.705 [2024-07-16 01:32:41.482883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.705 [2024-07-16 01:32:41.482894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.705 qpair failed and we were unable to recover it. 00:27:15.705 [2024-07-16 01:32:41.483046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.705 [2024-07-16 01:32:41.483057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.705 qpair failed and we were unable to recover it. 00:27:15.705 [2024-07-16 01:32:41.483230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.705 [2024-07-16 01:32:41.483242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.705 qpair failed and we were unable to recover it. 00:27:15.705 [2024-07-16 01:32:41.483406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.705 [2024-07-16 01:32:41.483418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.705 qpair failed and we were unable to recover it. 00:27:15.705 [2024-07-16 01:32:41.483594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.705 [2024-07-16 01:32:41.483624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.705 qpair failed and we were unable to recover it. 00:27:15.705 [2024-07-16 01:32:41.483742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.705 [2024-07-16 01:32:41.483773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.705 qpair failed and we were unable to recover it. 
00:27:15.706 [2024-07-16 01:32:41.484029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.706 [2024-07-16 01:32:41.484060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.706 qpair failed and we were unable to recover it. 00:27:15.706 [2024-07-16 01:32:41.484355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.706 [2024-07-16 01:32:41.484387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.706 qpair failed and we were unable to recover it. 00:27:15.706 [2024-07-16 01:32:41.484662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.706 [2024-07-16 01:32:41.484698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.706 qpair failed and we were unable to recover it. 00:27:15.706 [2024-07-16 01:32:41.484919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.706 [2024-07-16 01:32:41.484931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.706 qpair failed and we were unable to recover it. 00:27:15.706 [2024-07-16 01:32:41.485133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.706 [2024-07-16 01:32:41.485162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.706 qpair failed and we were unable to recover it. 00:27:15.706 [2024-07-16 01:32:41.485348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.706 [2024-07-16 01:32:41.485379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.706 qpair failed and we were unable to recover it. 00:27:15.706 [2024-07-16 01:32:41.485640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.706 [2024-07-16 01:32:41.485652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.706 qpair failed and we were unable to recover it. 00:27:15.706 [2024-07-16 01:32:41.485854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.706 [2024-07-16 01:32:41.485866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.706 qpair failed and we were unable to recover it. 00:27:15.706 [2024-07-16 01:32:41.486042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.706 [2024-07-16 01:32:41.486073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.706 qpair failed and we were unable to recover it. 00:27:15.706 [2024-07-16 01:32:41.486253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.706 [2024-07-16 01:32:41.486283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.706 qpair failed and we were unable to recover it. 
00:27:15.706 [2024-07-16 01:32:41.486519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.706 [2024-07-16 01:32:41.486554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.706 qpair failed and we were unable to recover it. 00:27:15.706 [2024-07-16 01:32:41.486853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.706 [2024-07-16 01:32:41.486890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.706 qpair failed and we were unable to recover it. 00:27:15.706 [2024-07-16 01:32:41.487171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.706 [2024-07-16 01:32:41.487201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.706 qpair failed and we were unable to recover it. 00:27:15.706 [2024-07-16 01:32:41.487484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.706 [2024-07-16 01:32:41.487516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.706 qpair failed and we were unable to recover it. 00:27:15.706 [2024-07-16 01:32:41.487799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.706 [2024-07-16 01:32:41.487830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.706 qpair failed and we were unable to recover it. 00:27:15.706 [2024-07-16 01:32:41.488010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.706 [2024-07-16 01:32:41.488042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.706 qpair failed and we were unable to recover it. 00:27:15.706 [2024-07-16 01:32:41.488253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.706 [2024-07-16 01:32:41.488283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.706 qpair failed and we were unable to recover it. 00:27:15.706 [2024-07-16 01:32:41.488556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.706 [2024-07-16 01:32:41.488588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.706 qpair failed and we were unable to recover it. 00:27:15.706 [2024-07-16 01:32:41.488737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.706 [2024-07-16 01:32:41.488767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.706 qpair failed and we were unable to recover it. 00:27:15.706 [2024-07-16 01:32:41.488962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.706 [2024-07-16 01:32:41.488994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.706 qpair failed and we were unable to recover it. 
00:27:15.706 [2024-07-16 01:32:41.489306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.706 [2024-07-16 01:32:41.489345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.706 qpair failed and we were unable to recover it. 00:27:15.706 [2024-07-16 01:32:41.489556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.706 [2024-07-16 01:32:41.489567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.706 qpair failed and we were unable to recover it. 00:27:15.706 [2024-07-16 01:32:41.489796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.706 [2024-07-16 01:32:41.489808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.706 qpair failed and we were unable to recover it. 00:27:15.706 [2024-07-16 01:32:41.489905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.706 [2024-07-16 01:32:41.489916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.706 qpair failed and we were unable to recover it. 00:27:15.706 [2024-07-16 01:32:41.490149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.706 [2024-07-16 01:32:41.490180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.706 qpair failed and we were unable to recover it. 00:27:15.706 [2024-07-16 01:32:41.490334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.706 [2024-07-16 01:32:41.490385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.706 qpair failed and we were unable to recover it. 00:27:15.706 [2024-07-16 01:32:41.490586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.706 [2024-07-16 01:32:41.490617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.706 qpair failed and we were unable to recover it. 00:27:15.706 [2024-07-16 01:32:41.490821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.706 [2024-07-16 01:32:41.490853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.706 qpair failed and we were unable to recover it. 00:27:15.706 [2024-07-16 01:32:41.491042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.706 [2024-07-16 01:32:41.491054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.706 qpair failed and we were unable to recover it. 00:27:15.706 [2024-07-16 01:32:41.491209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.706 [2024-07-16 01:32:41.491232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.706 qpair failed and we were unable to recover it. 
00:27:15.706 [2024-07-16 01:32:41.491458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.706 [2024-07-16 01:32:41.491490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.706 qpair failed and we were unable to recover it. 00:27:15.706 [2024-07-16 01:32:41.491686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.706 [2024-07-16 01:32:41.491716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.706 qpair failed and we were unable to recover it. 00:27:15.706 [2024-07-16 01:32:41.491983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.706 [2024-07-16 01:32:41.492013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.706 qpair failed and we were unable to recover it. 00:27:15.706 [2024-07-16 01:32:41.492271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.706 [2024-07-16 01:32:41.492301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.706 qpair failed and we were unable to recover it. 00:27:15.706 [2024-07-16 01:32:41.492552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.706 [2024-07-16 01:32:41.492564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.706 qpair failed and we were unable to recover it. 00:27:15.706 [2024-07-16 01:32:41.492649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.706 [2024-07-16 01:32:41.492659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.706 qpair failed and we were unable to recover it. 00:27:15.706 [2024-07-16 01:32:41.492835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.707 [2024-07-16 01:32:41.492865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.707 qpair failed and we were unable to recover it. 00:27:15.707 [2024-07-16 01:32:41.493044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.707 [2024-07-16 01:32:41.493074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.707 qpair failed and we were unable to recover it. 00:27:15.707 [2024-07-16 01:32:41.493405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.707 [2024-07-16 01:32:41.493438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.707 qpair failed and we were unable to recover it. 00:27:15.707 [2024-07-16 01:32:41.493617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.707 [2024-07-16 01:32:41.493649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.707 qpair failed and we were unable to recover it. 
00:27:15.707 [2024-07-16 01:32:41.493843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.707 [2024-07-16 01:32:41.493874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.707 qpair failed and we were unable to recover it. 00:27:15.707 [2024-07-16 01:32:41.494115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.707 [2024-07-16 01:32:41.494127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.707 qpair failed and we were unable to recover it. 00:27:15.707 [2024-07-16 01:32:41.494334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.707 [2024-07-16 01:32:41.494386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.707 qpair failed and we were unable to recover it. 00:27:15.707 [2024-07-16 01:32:41.494603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.707 [2024-07-16 01:32:41.494635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.707 qpair failed and we were unable to recover it. 00:27:15.707 [2024-07-16 01:32:41.494821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.707 [2024-07-16 01:32:41.494851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.707 qpair failed and we were unable to recover it. 00:27:15.707 [2024-07-16 01:32:41.495024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.707 [2024-07-16 01:32:41.495056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.707 qpair failed and we were unable to recover it. 00:27:15.707 [2024-07-16 01:32:41.495259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.707 [2024-07-16 01:32:41.495290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.707 qpair failed and we were unable to recover it. 00:27:15.707 [2024-07-16 01:32:41.495635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.707 [2024-07-16 01:32:41.495666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.707 qpair failed and we were unable to recover it. 00:27:15.707 [2024-07-16 01:32:41.495969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.707 [2024-07-16 01:32:41.496000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.707 qpair failed and we were unable to recover it. 00:27:15.707 [2024-07-16 01:32:41.496257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.707 [2024-07-16 01:32:41.496288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.707 qpair failed and we were unable to recover it. 
00:27:15.707 [2024-07-16 01:32:41.496588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.707 [2024-07-16 01:32:41.496600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.707 qpair failed and we were unable to recover it. 00:27:15.707 [2024-07-16 01:32:41.496762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.707 [2024-07-16 01:32:41.496798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.707 qpair failed and we were unable to recover it. 00:27:15.707 [2024-07-16 01:32:41.497042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.707 [2024-07-16 01:32:41.497073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.707 qpair failed and we were unable to recover it. 00:27:15.707 [2024-07-16 01:32:41.497323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.707 [2024-07-16 01:32:41.497364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.707 qpair failed and we were unable to recover it. 00:27:15.707 [2024-07-16 01:32:41.497566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.707 [2024-07-16 01:32:41.497597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.707 qpair failed and we were unable to recover it. 00:27:15.707 [2024-07-16 01:32:41.497860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.707 [2024-07-16 01:32:41.497872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.707 qpair failed and we were unable to recover it. 00:27:15.707 [2024-07-16 01:32:41.498099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.707 [2024-07-16 01:32:41.498110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.707 qpair failed and we were unable to recover it. 00:27:15.707 [2024-07-16 01:32:41.498356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.707 [2024-07-16 01:32:41.498399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.707 qpair failed and we were unable to recover it. 00:27:15.707 [2024-07-16 01:32:41.498587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.707 [2024-07-16 01:32:41.498617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.707 qpair failed and we were unable to recover it. 00:27:15.707 [2024-07-16 01:32:41.498848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.707 [2024-07-16 01:32:41.498879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.707 qpair failed and we were unable to recover it. 
00:27:15.707 [2024-07-16 01:32:41.499146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.707 [2024-07-16 01:32:41.499176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.707 qpair failed and we were unable to recover it. 00:27:15.707 [2024-07-16 01:32:41.499479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.707 [2024-07-16 01:32:41.499512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.707 qpair failed and we were unable to recover it. 00:27:15.707 [2024-07-16 01:32:41.499653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.707 [2024-07-16 01:32:41.499684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.707 qpair failed and we were unable to recover it. 00:27:15.707 [2024-07-16 01:32:41.499971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.707 [2024-07-16 01:32:41.499982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.707 qpair failed and we were unable to recover it. 00:27:15.707 [2024-07-16 01:32:41.500228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.707 [2024-07-16 01:32:41.500269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.707 qpair failed and we were unable to recover it. 00:27:15.707 [2024-07-16 01:32:41.500529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.707 [2024-07-16 01:32:41.500561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.707 qpair failed and we were unable to recover it. 00:27:15.707 [2024-07-16 01:32:41.500755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.707 [2024-07-16 01:32:41.500786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.707 qpair failed and we were unable to recover it. 00:27:15.707 [2024-07-16 01:32:41.500928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.707 [2024-07-16 01:32:41.500939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.707 qpair failed and we were unable to recover it. 00:27:15.707 [2024-07-16 01:32:41.501159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.707 [2024-07-16 01:32:41.501189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.707 qpair failed and we were unable to recover it. 00:27:15.707 [2024-07-16 01:32:41.501376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.707 [2024-07-16 01:32:41.501407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:15.707 qpair failed and we were unable to recover it. 
00:27:15.707 [2024-07-16 01:32:41.501538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.707 [2024-07-16 01:32:41.501569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:15.707 qpair failed and we were unable to recover it.
00:27:15.707 [... the same three-message sequence (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 01:32:41.501843 through 01:32:41.531204; only the timestamps differ ...]
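For anyone triaging this run: on Linux, errno = 111 is ECONNREFUSED, meaning the TCP connection attempt to 10.0.0.2 port 4420 was actively refused, i.e. the host was reachable but nothing was accepting connections on that port at the time. The standalone sketch below is not part of the test and is not SPDK code; it is a minimal illustration, assuming a Linux host and a reachable target with no listener on the port, of how a plain connect() produces the exact "connect() failed, errno = 111" condition these messages report.

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Plain blocking TCP connect to the address/port shown in the log.
     * Nothing needs to be listening there for this demo; the point is
     * the failure mode, not a successful connection. */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa = {0};
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);              /* NVMe/TCP port from the log (hypothetical target here) */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* With a reachable host but no listener on the port this prints:
         *   connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}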
00:27:15.711 [... the sequence continues unchanged for tqpair=0x7ff0bc000b90 from 01:32:41.531508 through 01:32:41.541300, except that three attempts between 01:32:41.533660 and 01:32:41.534251 report tqpair=0x7ff0b4000b90 instead, with the same addr=10.0.0.2, port=4420 and the same errno = 111 ...]
00:27:15.712 [... the sequence continues for tqpair=0x7ff0bc000b90 through 01:32:41.543406; from 01:32:41.543723 onward the failing qpair is reported as tqpair=0x1d2cfc0, still with addr=10.0.0.2, port=4420 and errno = 111, repeating through the end of this run ...]
00:27:15.714 [2024-07-16 01:32:41.554354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.714 [2024-07-16 01:32:41.554388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420
00:27:15.714 qpair failed and we were unable to recover it.
00:27:15.714 [2024-07-16 01:32:41.554607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.714 [2024-07-16 01:32:41.554638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.714 qpair failed and we were unable to recover it. 00:27:15.714 [2024-07-16 01:32:41.554870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.714 [2024-07-16 01:32:41.554902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.714 qpair failed and we were unable to recover it. 00:27:15.714 [2024-07-16 01:32:41.555176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.714 [2024-07-16 01:32:41.555208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.714 qpair failed and we were unable to recover it. 00:27:15.714 [2024-07-16 01:32:41.555335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.714 [2024-07-16 01:32:41.555378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.714 qpair failed and we were unable to recover it. 00:27:15.714 [2024-07-16 01:32:41.555571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.714 [2024-07-16 01:32:41.555602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.714 qpair failed and we were unable to recover it. 00:27:15.714 [2024-07-16 01:32:41.555859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.714 [2024-07-16 01:32:41.555890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.714 qpair failed and we were unable to recover it. 00:27:15.714 [2024-07-16 01:32:41.556088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.714 [2024-07-16 01:32:41.556119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.714 qpair failed and we were unable to recover it. 00:27:15.714 [2024-07-16 01:32:41.556396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.714 [2024-07-16 01:32:41.556429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.714 qpair failed and we were unable to recover it. 00:27:15.714 [2024-07-16 01:32:41.556707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.714 [2024-07-16 01:32:41.556738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.714 qpair failed and we were unable to recover it. 00:27:15.714 [2024-07-16 01:32:41.557023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.714 [2024-07-16 01:32:41.557054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.714 qpair failed and we were unable to recover it. 
00:27:15.714 [2024-07-16 01:32:41.557316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.714 [2024-07-16 01:32:41.557358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.714 qpair failed and we were unable to recover it. 00:27:15.714 [2024-07-16 01:32:41.557563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.714 [2024-07-16 01:32:41.557594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.714 qpair failed and we were unable to recover it. 00:27:15.714 [2024-07-16 01:32:41.557706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.714 [2024-07-16 01:32:41.557726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.714 qpair failed and we were unable to recover it. 00:27:15.714 [2024-07-16 01:32:41.557937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.714 [2024-07-16 01:32:41.557968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.714 qpair failed and we were unable to recover it. 00:27:15.714 [2024-07-16 01:32:41.558161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.714 [2024-07-16 01:32:41.558192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.714 qpair failed and we were unable to recover it. 00:27:15.714 [2024-07-16 01:32:41.558492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.714 [2024-07-16 01:32:41.558525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.714 qpair failed and we were unable to recover it. 00:27:15.714 [2024-07-16 01:32:41.558741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.714 [2024-07-16 01:32:41.558773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.714 qpair failed and we were unable to recover it. 00:27:15.714 [2024-07-16 01:32:41.559048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.714 [2024-07-16 01:32:41.559079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.714 qpair failed and we were unable to recover it. 00:27:15.714 [2024-07-16 01:32:41.559351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.714 [2024-07-16 01:32:41.559384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.714 qpair failed and we were unable to recover it. 00:27:15.714 [2024-07-16 01:32:41.559680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.714 [2024-07-16 01:32:41.559711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.714 qpair failed and we were unable to recover it. 
00:27:15.714 [2024-07-16 01:32:41.559982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.714 [2024-07-16 01:32:41.560013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.714 qpair failed and we were unable to recover it. 00:27:15.714 [2024-07-16 01:32:41.560270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.714 [2024-07-16 01:32:41.560287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.714 qpair failed and we were unable to recover it. 00:27:15.714 [2024-07-16 01:32:41.560539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.714 [2024-07-16 01:32:41.560556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.714 qpair failed and we were unable to recover it. 00:27:15.714 [2024-07-16 01:32:41.560678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.714 [2024-07-16 01:32:41.560695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.714 qpair failed and we were unable to recover it. 00:27:15.714 [2024-07-16 01:32:41.560936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.714 [2024-07-16 01:32:41.560967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.714 qpair failed and we were unable to recover it. 00:27:15.714 [2024-07-16 01:32:41.561240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.714 [2024-07-16 01:32:41.561271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.714 qpair failed and we were unable to recover it. 00:27:15.714 [2024-07-16 01:32:41.561484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.714 [2024-07-16 01:32:41.561518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.714 qpair failed and we were unable to recover it. 00:27:15.714 [2024-07-16 01:32:41.561816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.715 [2024-07-16 01:32:41.561847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.715 qpair failed and we were unable to recover it. 00:27:15.715 [2024-07-16 01:32:41.562072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.715 [2024-07-16 01:32:41.562104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.715 qpair failed and we were unable to recover it. 00:27:15.715 [2024-07-16 01:32:41.562311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.715 [2024-07-16 01:32:41.562352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.715 qpair failed and we were unable to recover it. 
00:27:15.715 [2024-07-16 01:32:41.562627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.715 [2024-07-16 01:32:41.562658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.715 qpair failed and we were unable to recover it. 00:27:15.715 [2024-07-16 01:32:41.562880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.715 [2024-07-16 01:32:41.562897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.715 qpair failed and we were unable to recover it. 00:27:15.715 [2024-07-16 01:32:41.563113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.715 [2024-07-16 01:32:41.563131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.715 qpair failed and we were unable to recover it. 00:27:15.715 [2024-07-16 01:32:41.563319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.715 [2024-07-16 01:32:41.563341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.715 qpair failed and we were unable to recover it. 00:27:15.715 [2024-07-16 01:32:41.563560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.715 [2024-07-16 01:32:41.563577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.715 qpair failed and we were unable to recover it. 00:27:15.715 [2024-07-16 01:32:41.563735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.715 [2024-07-16 01:32:41.563751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.715 qpair failed and we were unable to recover it. 00:27:15.715 [2024-07-16 01:32:41.563936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.715 [2024-07-16 01:32:41.563967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.715 qpair failed and we were unable to recover it. 00:27:15.715 [2024-07-16 01:32:41.564241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.715 [2024-07-16 01:32:41.564272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.715 qpair failed and we were unable to recover it. 00:27:15.715 [2024-07-16 01:32:41.564570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.715 [2024-07-16 01:32:41.564603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.715 qpair failed and we were unable to recover it. 00:27:15.715 [2024-07-16 01:32:41.564871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.715 [2024-07-16 01:32:41.564891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.715 qpair failed and we were unable to recover it. 
00:27:15.715 [2024-07-16 01:32:41.565116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.715 [2024-07-16 01:32:41.565133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.715 qpair failed and we were unable to recover it. 00:27:15.715 [2024-07-16 01:32:41.565316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.715 [2024-07-16 01:32:41.565334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.715 qpair failed and we were unable to recover it. 00:27:15.715 [2024-07-16 01:32:41.565524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.715 [2024-07-16 01:32:41.565541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.715 qpair failed and we were unable to recover it. 00:27:15.715 [2024-07-16 01:32:41.565714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.715 [2024-07-16 01:32:41.565745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.715 qpair failed and we were unable to recover it. 00:27:15.715 [2024-07-16 01:32:41.566018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.715 [2024-07-16 01:32:41.566049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.715 qpair failed and we were unable to recover it. 00:27:15.715 [2024-07-16 01:32:41.566329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.715 [2024-07-16 01:32:41.566369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.715 qpair failed and we were unable to recover it. 00:27:15.715 [2024-07-16 01:32:41.566648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.715 [2024-07-16 01:32:41.566688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.715 qpair failed and we were unable to recover it. 00:27:15.715 [2024-07-16 01:32:41.566801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.715 [2024-07-16 01:32:41.566818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.715 qpair failed and we were unable to recover it. 00:27:15.715 [2024-07-16 01:32:41.567062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.715 [2024-07-16 01:32:41.567094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.715 qpair failed and we were unable to recover it. 00:27:15.715 [2024-07-16 01:32:41.567221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.715 [2024-07-16 01:32:41.567252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.715 qpair failed and we were unable to recover it. 
00:27:15.715 [2024-07-16 01:32:41.567527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.715 [2024-07-16 01:32:41.567559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.715 qpair failed and we were unable to recover it. 00:27:15.715 [2024-07-16 01:32:41.567824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.715 [2024-07-16 01:32:41.567857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.715 qpair failed and we were unable to recover it. 00:27:15.715 [2024-07-16 01:32:41.568079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.715 [2024-07-16 01:32:41.568112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.715 qpair failed and we were unable to recover it. 00:27:15.715 [2024-07-16 01:32:41.568415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.715 [2024-07-16 01:32:41.568448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.715 qpair failed and we were unable to recover it. 00:27:15.715 [2024-07-16 01:32:41.568726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.715 [2024-07-16 01:32:41.568758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.715 qpair failed and we were unable to recover it. 00:27:15.715 [2024-07-16 01:32:41.568966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.715 [2024-07-16 01:32:41.568999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.715 qpair failed and we were unable to recover it. 00:27:15.715 [2024-07-16 01:32:41.569196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.715 [2024-07-16 01:32:41.569227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.715 qpair failed and we were unable to recover it. 00:27:15.715 [2024-07-16 01:32:41.569494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.715 [2024-07-16 01:32:41.569526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.715 qpair failed and we were unable to recover it. 00:27:15.715 [2024-07-16 01:32:41.569800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.715 [2024-07-16 01:32:41.569831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.715 qpair failed and we were unable to recover it. 00:27:15.715 [2024-07-16 01:32:41.570048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.715 [2024-07-16 01:32:41.570065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.715 qpair failed and we were unable to recover it. 
00:27:15.715 [2024-07-16 01:32:41.570308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.715 [2024-07-16 01:32:41.570325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.715 qpair failed and we were unable to recover it. 00:27:15.715 [2024-07-16 01:32:41.570517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.715 [2024-07-16 01:32:41.570535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.716 qpair failed and we were unable to recover it. 00:27:15.716 [2024-07-16 01:32:41.570721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.716 [2024-07-16 01:32:41.570738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.716 qpair failed and we were unable to recover it. 00:27:15.716 [2024-07-16 01:32:41.570992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.716 [2024-07-16 01:32:41.571008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.716 qpair failed and we were unable to recover it. 00:27:15.716 [2024-07-16 01:32:41.571235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.716 [2024-07-16 01:32:41.571252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.716 qpair failed and we were unable to recover it. 00:27:15.716 [2024-07-16 01:32:41.571471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.716 [2024-07-16 01:32:41.571488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.716 qpair failed and we were unable to recover it. 00:27:15.716 [2024-07-16 01:32:41.571582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.716 [2024-07-16 01:32:41.571598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.716 qpair failed and we were unable to recover it. 00:27:15.716 [2024-07-16 01:32:41.571821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.716 [2024-07-16 01:32:41.571853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.716 qpair failed and we were unable to recover it. 00:27:15.716 [2024-07-16 01:32:41.572109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.716 [2024-07-16 01:32:41.572140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.716 qpair failed and we were unable to recover it. 00:27:15.716 [2024-07-16 01:32:41.572417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.716 [2024-07-16 01:32:41.572450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.716 qpair failed and we were unable to recover it. 
00:27:15.716 [2024-07-16 01:32:41.572727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.716 [2024-07-16 01:32:41.572758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.716 qpair failed and we were unable to recover it. 00:27:15.716 [2024-07-16 01:32:41.572991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.716 [2024-07-16 01:32:41.573022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.716 qpair failed and we were unable to recover it. 00:27:15.716 [2024-07-16 01:32:41.573304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.716 [2024-07-16 01:32:41.573357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.716 qpair failed and we were unable to recover it. 00:27:15.716 [2024-07-16 01:32:41.573535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.716 [2024-07-16 01:32:41.573553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.716 qpair failed and we were unable to recover it. 00:27:15.716 [2024-07-16 01:32:41.573798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.716 [2024-07-16 01:32:41.573830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.716 qpair failed and we were unable to recover it. 00:27:15.716 [2024-07-16 01:32:41.574082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.716 [2024-07-16 01:32:41.574114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.716 qpair failed and we were unable to recover it. 00:27:15.716 [2024-07-16 01:32:41.574318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.716 [2024-07-16 01:32:41.574357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.716 qpair failed and we were unable to recover it. 00:27:15.716 [2024-07-16 01:32:41.574604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.716 [2024-07-16 01:32:41.574636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.716 qpair failed and we were unable to recover it. 00:27:15.716 [2024-07-16 01:32:41.574814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.716 [2024-07-16 01:32:41.574845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.716 qpair failed and we were unable to recover it. 00:27:15.716 [2024-07-16 01:32:41.575071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.716 [2024-07-16 01:32:41.575088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.716 qpair failed and we were unable to recover it. 
00:27:15.716 [2024-07-16 01:32:41.575273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.716 [2024-07-16 01:32:41.575291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.716 qpair failed and we were unable to recover it. 00:27:15.716 [2024-07-16 01:32:41.575574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.716 [2024-07-16 01:32:41.575592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.716 qpair failed and we were unable to recover it. 00:27:15.716 [2024-07-16 01:32:41.575832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.716 [2024-07-16 01:32:41.575849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.716 qpair failed and we were unable to recover it. 00:27:15.716 [2024-07-16 01:32:41.576067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.716 [2024-07-16 01:32:41.576095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.716 qpair failed and we were unable to recover it. 00:27:15.716 [2024-07-16 01:32:41.576259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.716 [2024-07-16 01:32:41.576276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.716 qpair failed and we were unable to recover it. 00:27:15.716 [2024-07-16 01:32:41.576390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.716 [2024-07-16 01:32:41.576406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.716 qpair failed and we were unable to recover it. 00:27:15.716 [2024-07-16 01:32:41.576624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.716 [2024-07-16 01:32:41.576641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.716 qpair failed and we were unable to recover it. 00:27:15.716 [2024-07-16 01:32:41.576913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.716 [2024-07-16 01:32:41.576931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.716 qpair failed and we were unable to recover it. 00:27:15.716 [2024-07-16 01:32:41.577167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.716 [2024-07-16 01:32:41.577184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.716 qpair failed and we were unable to recover it. 00:27:15.716 [2024-07-16 01:32:41.577350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.716 [2024-07-16 01:32:41.577368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.716 qpair failed and we were unable to recover it. 
00:27:15.716 [2024-07-16 01:32:41.577524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.716 [2024-07-16 01:32:41.577555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.716 qpair failed and we were unable to recover it. 00:27:15.716 [2024-07-16 01:32:41.577787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.716 [2024-07-16 01:32:41.577817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.716 qpair failed and we were unable to recover it. 00:27:15.716 [2024-07-16 01:32:41.578031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.716 [2024-07-16 01:32:41.578062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.716 qpair failed and we were unable to recover it. 00:27:15.716 [2024-07-16 01:32:41.578347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.716 [2024-07-16 01:32:41.578380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.716 qpair failed and we were unable to recover it. 00:27:15.716 [2024-07-16 01:32:41.578604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.716 [2024-07-16 01:32:41.578636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.716 qpair failed and we were unable to recover it. 00:27:15.716 [2024-07-16 01:32:41.578902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.716 [2024-07-16 01:32:41.578919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.716 qpair failed and we were unable to recover it. 00:27:15.716 [2024-07-16 01:32:41.579003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.716 [2024-07-16 01:32:41.579020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.716 qpair failed and we were unable to recover it. 00:27:15.716 [2024-07-16 01:32:41.579263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.716 [2024-07-16 01:32:41.579280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.716 qpair failed and we were unable to recover it. 00:27:15.716 [2024-07-16 01:32:41.579383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.717 [2024-07-16 01:32:41.579399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.717 qpair failed and we were unable to recover it. 00:27:15.717 [2024-07-16 01:32:41.579650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.717 [2024-07-16 01:32:41.579667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.717 qpair failed and we were unable to recover it. 
00:27:15.717 [2024-07-16 01:32:41.579906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.717 [2024-07-16 01:32:41.579938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.717 qpair failed and we were unable to recover it. 00:27:15.717 [2024-07-16 01:32:41.580221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.717 [2024-07-16 01:32:41.580252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.717 qpair failed and we were unable to recover it. 00:27:15.717 [2024-07-16 01:32:41.580392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.717 [2024-07-16 01:32:41.580425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.717 qpair failed and we were unable to recover it. 00:27:15.717 [2024-07-16 01:32:41.580633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.717 [2024-07-16 01:32:41.580666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.717 qpair failed and we were unable to recover it. 00:27:15.717 [2024-07-16 01:32:41.580884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.717 [2024-07-16 01:32:41.580915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.717 qpair failed and we were unable to recover it. 00:27:15.717 [2024-07-16 01:32:41.581110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.717 [2024-07-16 01:32:41.581128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.717 qpair failed and we were unable to recover it. 00:27:15.717 [2024-07-16 01:32:41.581353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.717 [2024-07-16 01:32:41.581370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.717 qpair failed and we were unable to recover it. 00:27:15.717 [2024-07-16 01:32:41.581532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.717 [2024-07-16 01:32:41.581552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.717 qpair failed and we were unable to recover it. 00:27:15.717 [2024-07-16 01:32:41.581762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.717 [2024-07-16 01:32:41.581794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.717 qpair failed and we were unable to recover it. 00:27:15.717 [2024-07-16 01:32:41.582072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.717 [2024-07-16 01:32:41.582103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.717 qpair failed and we were unable to recover it. 
00:27:15.717 [2024-07-16 01:32:41.582382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.717 [2024-07-16 01:32:41.582415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.717 qpair failed and we were unable to recover it. 00:27:15.717 [2024-07-16 01:32:41.582643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.717 [2024-07-16 01:32:41.582676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.717 qpair failed and we were unable to recover it. 00:27:15.717 [2024-07-16 01:32:41.582955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.717 [2024-07-16 01:32:41.582987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.717 qpair failed and we were unable to recover it. 00:27:15.717 [2024-07-16 01:32:41.583246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.717 [2024-07-16 01:32:41.583278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.717 qpair failed and we were unable to recover it. 00:27:15.717 [2024-07-16 01:32:41.583584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.717 [2024-07-16 01:32:41.583616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.717 qpair failed and we were unable to recover it. 00:27:15.717 [2024-07-16 01:32:41.583812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.717 [2024-07-16 01:32:41.583843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.717 qpair failed and we were unable to recover it. 00:27:15.717 [2024-07-16 01:32:41.584096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.717 [2024-07-16 01:32:41.584126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.717 qpair failed and we were unable to recover it. 00:27:15.717 [2024-07-16 01:32:41.584376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.717 [2024-07-16 01:32:41.584410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.717 qpair failed and we were unable to recover it. 00:27:15.717 [2024-07-16 01:32:41.584594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.717 [2024-07-16 01:32:41.584625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.717 qpair failed and we were unable to recover it. 00:27:15.717 [2024-07-16 01:32:41.584806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.717 [2024-07-16 01:32:41.584823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.717 qpair failed and we were unable to recover it. 
00:27:15.717 [2024-07-16 01:32:41.585004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.717 [2024-07-16 01:32:41.585021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.717 qpair failed and we were unable to recover it. 00:27:15.717 [2024-07-16 01:32:41.585187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.717 [2024-07-16 01:32:41.585220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.717 qpair failed and we were unable to recover it. 00:27:15.717 [2024-07-16 01:32:41.585471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.717 [2024-07-16 01:32:41.585504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.717 qpair failed and we were unable to recover it. 00:27:15.717 [2024-07-16 01:32:41.585710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.717 [2024-07-16 01:32:41.585742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.717 qpair failed and we were unable to recover it. 00:27:15.717 [2024-07-16 01:32:41.585956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.717 [2024-07-16 01:32:41.585988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.717 qpair failed and we were unable to recover it. 00:27:15.717 [2024-07-16 01:32:41.586260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.717 [2024-07-16 01:32:41.586291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.717 qpair failed and we were unable to recover it. 00:27:15.717 [2024-07-16 01:32:41.586573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.717 [2024-07-16 01:32:41.586605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.717 qpair failed and we were unable to recover it. 00:27:15.717 [2024-07-16 01:32:41.586864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.717 [2024-07-16 01:32:41.586896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.717 qpair failed and we were unable to recover it. 00:27:15.717 [2024-07-16 01:32:41.587145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.717 [2024-07-16 01:32:41.587177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.717 qpair failed and we were unable to recover it. 00:27:15.717 [2024-07-16 01:32:41.587383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.717 [2024-07-16 01:32:41.587415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.717 qpair failed and we were unable to recover it. 
00:27:15.717 [2024-07-16 01:32:41.587695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.717 [2024-07-16 01:32:41.587734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420
00:27:15.717 qpair failed and we were unable to recover it.
[... the same three-line failure record repeats ~200 more times, timestamps 2024-07-16 01:32:41.588 through 01:32:41.640: posix_sock_create connect() failed with errno = 111 (connection refused), followed by the nvme_tcp_qpair_connect_sock error for tqpair=0x1d2cfc0, addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." ...]
00:27:15.723 [2024-07-16 01:32:41.639960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.723 [2024-07-16 01:32:41.639975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420
00:27:15.723 qpair failed and we were unable to recover it.
00:27:15.723 [2024-07-16 01:32:41.640148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.723 [2024-07-16 01:32:41.640163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.723 qpair failed and we were unable to recover it. 00:27:15.723 [2024-07-16 01:32:41.640331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.723 [2024-07-16 01:32:41.640355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.723 qpair failed and we were unable to recover it. 00:27:15.724 [2024-07-16 01:32:41.640543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.724 [2024-07-16 01:32:41.640559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.724 qpair failed and we were unable to recover it. 00:27:15.724 [2024-07-16 01:32:41.640774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.724 [2024-07-16 01:32:41.640790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.724 qpair failed and we were unable to recover it. 00:27:15.724 [2024-07-16 01:32:41.640885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.724 [2024-07-16 01:32:41.640900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.724 qpair failed and we were unable to recover it. 00:27:15.724 [2024-07-16 01:32:41.641179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.724 [2024-07-16 01:32:41.641195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.724 qpair failed and we were unable to recover it. 00:27:15.724 [2024-07-16 01:32:41.641362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.724 [2024-07-16 01:32:41.641378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.724 qpair failed and we were unable to recover it. 00:27:15.724 [2024-07-16 01:32:41.641559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.724 [2024-07-16 01:32:41.641575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.724 qpair failed and we were unable to recover it. 00:27:15.724 [2024-07-16 01:32:41.641752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.724 [2024-07-16 01:32:41.641768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.724 qpair failed and we were unable to recover it. 00:27:15.724 [2024-07-16 01:32:41.641947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.724 [2024-07-16 01:32:41.641962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.724 qpair failed and we were unable to recover it. 
00:27:15.724 [2024-07-16 01:32:41.642203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.724 [2024-07-16 01:32:41.642232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.724 qpair failed and we were unable to recover it. 00:27:15.724 [2024-07-16 01:32:41.642535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.724 [2024-07-16 01:32:41.642576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.724 qpair failed and we were unable to recover it. 00:27:15.724 [2024-07-16 01:32:41.642727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.724 [2024-07-16 01:32:41.642757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.724 qpair failed and we were unable to recover it. 00:27:15.724 [2024-07-16 01:32:41.642980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.724 [2024-07-16 01:32:41.643009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.724 qpair failed and we were unable to recover it. 00:27:15.724 [2024-07-16 01:32:41.643206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.724 [2024-07-16 01:32:41.643236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.724 qpair failed and we were unable to recover it. 00:27:15.724 [2024-07-16 01:32:41.643499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.724 [2024-07-16 01:32:41.643515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.724 qpair failed and we were unable to recover it. 00:27:15.724 [2024-07-16 01:32:41.643683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.724 [2024-07-16 01:32:41.643699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.724 qpair failed and we were unable to recover it. 00:27:15.724 [2024-07-16 01:32:41.643963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.724 [2024-07-16 01:32:41.643992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.724 qpair failed and we were unable to recover it. 00:27:15.724 [2024-07-16 01:32:41.644275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.724 [2024-07-16 01:32:41.644304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.724 qpair failed and we were unable to recover it. 00:27:15.724 [2024-07-16 01:32:41.644617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.724 [2024-07-16 01:32:41.644634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.724 qpair failed and we were unable to recover it. 
00:27:15.724 [2024-07-16 01:32:41.644901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.724 [2024-07-16 01:32:41.644916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:15.724 qpair failed and we were unable to recover it. 00:27:16.005 [2024-07-16 01:32:41.645075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.005 [2024-07-16 01:32:41.645099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.005 qpair failed and we were unable to recover it. 00:27:16.005 [2024-07-16 01:32:41.645275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.005 [2024-07-16 01:32:41.645290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.006 qpair failed and we were unable to recover it. 00:27:16.006 [2024-07-16 01:32:41.645480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.006 [2024-07-16 01:32:41.645496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.006 qpair failed and we were unable to recover it. 00:27:16.006 [2024-07-16 01:32:41.645673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.006 [2024-07-16 01:32:41.645688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.006 qpair failed and we were unable to recover it. 00:27:16.006 [2024-07-16 01:32:41.645917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.006 [2024-07-16 01:32:41.645932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.006 qpair failed and we were unable to recover it. 00:27:16.006 [2024-07-16 01:32:41.646109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.006 [2024-07-16 01:32:41.646124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.006 qpair failed and we were unable to recover it. 00:27:16.006 [2024-07-16 01:32:41.646318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.006 [2024-07-16 01:32:41.646333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.006 qpair failed and we were unable to recover it. 00:27:16.006 [2024-07-16 01:32:41.646563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.006 [2024-07-16 01:32:41.646578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.006 qpair failed and we were unable to recover it. 00:27:16.006 [2024-07-16 01:32:41.646758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.006 [2024-07-16 01:32:41.646773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.006 qpair failed and we were unable to recover it. 
00:27:16.006 [2024-07-16 01:32:41.646934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.006 [2024-07-16 01:32:41.646949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.006 qpair failed and we were unable to recover it. 00:27:16.006 [2024-07-16 01:32:41.647141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.006 [2024-07-16 01:32:41.647156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.006 qpair failed and we were unable to recover it. 00:27:16.006 [2024-07-16 01:32:41.647352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.006 [2024-07-16 01:32:41.647368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.006 qpair failed and we were unable to recover it. 00:27:16.006 [2024-07-16 01:32:41.647493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.006 [2024-07-16 01:32:41.647508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.006 qpair failed and we were unable to recover it. 00:27:16.006 [2024-07-16 01:32:41.647666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.006 [2024-07-16 01:32:41.647681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.006 qpair failed and we were unable to recover it. 00:27:16.006 [2024-07-16 01:32:41.647849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.006 [2024-07-16 01:32:41.647864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.006 qpair failed and we were unable to recover it. 00:27:16.006 [2024-07-16 01:32:41.648117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.006 [2024-07-16 01:32:41.648132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.006 qpair failed and we were unable to recover it. 00:27:16.006 [2024-07-16 01:32:41.648353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.006 [2024-07-16 01:32:41.648369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.006 qpair failed and we were unable to recover it. 00:27:16.006 [2024-07-16 01:32:41.648546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.006 [2024-07-16 01:32:41.648562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.006 qpair failed and we were unable to recover it. 00:27:16.006 [2024-07-16 01:32:41.648749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.006 [2024-07-16 01:32:41.648765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.006 qpair failed and we were unable to recover it. 
00:27:16.006 [2024-07-16 01:32:41.648993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.006 [2024-07-16 01:32:41.649009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.006 qpair failed and we were unable to recover it. 00:27:16.006 [2024-07-16 01:32:41.649104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.006 [2024-07-16 01:32:41.649120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.006 qpair failed and we were unable to recover it. 00:27:16.006 [2024-07-16 01:32:41.649356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.006 [2024-07-16 01:32:41.649372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.006 qpair failed and we were unable to recover it. 00:27:16.006 [2024-07-16 01:32:41.649534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.006 [2024-07-16 01:32:41.649549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.006 qpair failed and we were unable to recover it. 00:27:16.006 [2024-07-16 01:32:41.649731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.006 [2024-07-16 01:32:41.649746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.006 qpair failed and we were unable to recover it. 00:27:16.006 [2024-07-16 01:32:41.649896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.006 [2024-07-16 01:32:41.649912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.006 qpair failed and we were unable to recover it. 00:27:16.006 [2024-07-16 01:32:41.650165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.006 [2024-07-16 01:32:41.650180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.006 qpair failed and we were unable to recover it. 00:27:16.006 [2024-07-16 01:32:41.650389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.006 [2024-07-16 01:32:41.650405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.006 qpair failed and we were unable to recover it. 00:27:16.006 [2024-07-16 01:32:41.650572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.006 [2024-07-16 01:32:41.650587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.006 qpair failed and we were unable to recover it. 00:27:16.006 [2024-07-16 01:32:41.650833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.006 [2024-07-16 01:32:41.650862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.006 qpair failed and we were unable to recover it. 
00:27:16.006 [2024-07-16 01:32:41.651162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.006 [2024-07-16 01:32:41.651191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.006 qpair failed and we were unable to recover it. 00:27:16.006 [2024-07-16 01:32:41.651468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.006 [2024-07-16 01:32:41.651498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.006 qpair failed and we were unable to recover it. 00:27:16.006 [2024-07-16 01:32:41.651701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.006 [2024-07-16 01:32:41.651732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.006 qpair failed and we were unable to recover it. 00:27:16.006 [2024-07-16 01:32:41.652011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.006 [2024-07-16 01:32:41.652041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.006 qpair failed and we were unable to recover it. 00:27:16.006 [2024-07-16 01:32:41.652322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.006 [2024-07-16 01:32:41.652363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.006 qpair failed and we were unable to recover it. 00:27:16.006 [2024-07-16 01:32:41.652621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.006 [2024-07-16 01:32:41.652650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.006 qpair failed and we were unable to recover it. 00:27:16.006 [2024-07-16 01:32:41.652868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.006 [2024-07-16 01:32:41.652901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.006 qpair failed and we were unable to recover it. 00:27:16.006 [2024-07-16 01:32:41.653152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.006 [2024-07-16 01:32:41.653181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.006 qpair failed and we were unable to recover it. 00:27:16.006 [2024-07-16 01:32:41.653480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.006 [2024-07-16 01:32:41.653496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.006 qpair failed and we were unable to recover it. 00:27:16.006 [2024-07-16 01:32:41.653658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.006 [2024-07-16 01:32:41.653673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.006 qpair failed and we were unable to recover it. 
00:27:16.006 [2024-07-16 01:32:41.653769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.006 [2024-07-16 01:32:41.653783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.006 qpair failed and we were unable to recover it. 00:27:16.006 [2024-07-16 01:32:41.653895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.006 [2024-07-16 01:32:41.653910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.006 qpair failed and we were unable to recover it. 00:27:16.006 [2024-07-16 01:32:41.654160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.006 [2024-07-16 01:32:41.654189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.006 qpair failed and we were unable to recover it. 00:27:16.006 [2024-07-16 01:32:41.654422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.007 [2024-07-16 01:32:41.654453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.007 qpair failed and we were unable to recover it. 00:27:16.007 [2024-07-16 01:32:41.654649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.007 [2024-07-16 01:32:41.654679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.007 qpair failed and we were unable to recover it. 00:27:16.007 [2024-07-16 01:32:41.654938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.007 [2024-07-16 01:32:41.654967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.007 qpair failed and we were unable to recover it. 00:27:16.007 [2024-07-16 01:32:41.655231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.007 [2024-07-16 01:32:41.655261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.007 qpair failed and we were unable to recover it. 00:27:16.007 [2024-07-16 01:32:41.655519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.007 [2024-07-16 01:32:41.655550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.007 qpair failed and we were unable to recover it. 00:27:16.007 [2024-07-16 01:32:41.655754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.007 [2024-07-16 01:32:41.655784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.007 qpair failed and we were unable to recover it. 00:27:16.007 [2024-07-16 01:32:41.656060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.007 [2024-07-16 01:32:41.656088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.007 qpair failed and we were unable to recover it. 
00:27:16.007 [2024-07-16 01:32:41.656357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.007 [2024-07-16 01:32:41.656387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.007 qpair failed and we were unable to recover it. 00:27:16.007 [2024-07-16 01:32:41.656596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.007 [2024-07-16 01:32:41.656626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.007 qpair failed and we were unable to recover it. 00:27:16.007 [2024-07-16 01:32:41.656889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.007 [2024-07-16 01:32:41.656918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.007 qpair failed and we were unable to recover it. 00:27:16.007 [2024-07-16 01:32:41.657182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.007 [2024-07-16 01:32:41.657211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.007 qpair failed and we were unable to recover it. 00:27:16.007 [2024-07-16 01:32:41.657469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.007 [2024-07-16 01:32:41.657500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.007 qpair failed and we were unable to recover it. 00:27:16.007 [2024-07-16 01:32:41.657713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.007 [2024-07-16 01:32:41.657742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.007 qpair failed and we were unable to recover it. 00:27:16.007 [2024-07-16 01:32:41.658013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.007 [2024-07-16 01:32:41.658042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.007 qpair failed and we were unable to recover it. 00:27:16.007 [2024-07-16 01:32:41.658245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.007 [2024-07-16 01:32:41.658274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.007 qpair failed and we were unable to recover it. 00:27:16.007 [2024-07-16 01:32:41.658493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.007 [2024-07-16 01:32:41.658509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.007 qpair failed and we were unable to recover it. 00:27:16.007 [2024-07-16 01:32:41.658705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.007 [2024-07-16 01:32:41.658724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.007 qpair failed and we were unable to recover it. 
00:27:16.007 [2024-07-16 01:32:41.658912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.007 [2024-07-16 01:32:41.658948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.007 qpair failed and we were unable to recover it. 00:27:16.007 [2024-07-16 01:32:41.659071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.007 [2024-07-16 01:32:41.659101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.007 qpair failed and we were unable to recover it. 00:27:16.007 [2024-07-16 01:32:41.659301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.007 [2024-07-16 01:32:41.659330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.007 qpair failed and we were unable to recover it. 00:27:16.007 [2024-07-16 01:32:41.659530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.007 [2024-07-16 01:32:41.659561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.007 qpair failed and we were unable to recover it. 00:27:16.007 [2024-07-16 01:32:41.659812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.007 [2024-07-16 01:32:41.659842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.007 qpair failed and we were unable to recover it. 00:27:16.007 [2024-07-16 01:32:41.660040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.007 [2024-07-16 01:32:41.660055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.007 qpair failed and we were unable to recover it. 00:27:16.007 [2024-07-16 01:32:41.660335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.007 [2024-07-16 01:32:41.660357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.007 qpair failed and we were unable to recover it. 00:27:16.007 [2024-07-16 01:32:41.660516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.007 [2024-07-16 01:32:41.660531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.007 qpair failed and we were unable to recover it. 00:27:16.007 [2024-07-16 01:32:41.660793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.007 [2024-07-16 01:32:41.660809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.007 qpair failed and we were unable to recover it. 00:27:16.007 [2024-07-16 01:32:41.660971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.007 [2024-07-16 01:32:41.660986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.007 qpair failed and we were unable to recover it. 
00:27:16.007 [2024-07-16 01:32:41.661235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.007 [2024-07-16 01:32:41.661251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.007 qpair failed and we were unable to recover it. 00:27:16.007 [2024-07-16 01:32:41.661444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.007 [2024-07-16 01:32:41.661461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.007 qpair failed and we were unable to recover it. 00:27:16.007 [2024-07-16 01:32:41.661718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.007 [2024-07-16 01:32:41.661734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.007 qpair failed and we were unable to recover it. 00:27:16.007 [2024-07-16 01:32:41.661907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.007 [2024-07-16 01:32:41.661923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.007 qpair failed and we were unable to recover it. 00:27:16.007 [2024-07-16 01:32:41.662151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.007 [2024-07-16 01:32:41.662166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.007 qpair failed and we were unable to recover it. 00:27:16.007 [2024-07-16 01:32:41.662416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.007 [2024-07-16 01:32:41.662446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.007 qpair failed and we were unable to recover it. 00:27:16.007 [2024-07-16 01:32:41.662571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.007 [2024-07-16 01:32:41.662601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.007 qpair failed and we were unable to recover it. 00:27:16.007 [2024-07-16 01:32:41.662805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.007 [2024-07-16 01:32:41.662834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.007 qpair failed and we were unable to recover it. 00:27:16.007 [2024-07-16 01:32:41.663047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.007 [2024-07-16 01:32:41.663062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.007 qpair failed and we were unable to recover it. 00:27:16.007 [2024-07-16 01:32:41.663231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.007 [2024-07-16 01:32:41.663246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.007 qpair failed and we were unable to recover it. 
00:27:16.007 [2024-07-16 01:32:41.663418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.007 [2024-07-16 01:32:41.663450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.007 qpair failed and we were unable to recover it. 00:27:16.007 [2024-07-16 01:32:41.663644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.008 [2024-07-16 01:32:41.663674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.008 qpair failed and we were unable to recover it. 00:27:16.008 [2024-07-16 01:32:41.663890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.008 [2024-07-16 01:32:41.663919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.008 qpair failed and we were unable to recover it. 00:27:16.008 [2024-07-16 01:32:41.664136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.008 [2024-07-16 01:32:41.664151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.008 qpair failed and we were unable to recover it. 00:27:16.008 [2024-07-16 01:32:41.664384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.008 [2024-07-16 01:32:41.664400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.008 qpair failed and we were unable to recover it. 00:27:16.008 [2024-07-16 01:32:41.664620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.008 [2024-07-16 01:32:41.664635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.008 qpair failed and we were unable to recover it. 00:27:16.008 [2024-07-16 01:32:41.664814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.008 [2024-07-16 01:32:41.664833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.008 qpair failed and we were unable to recover it. 00:27:16.008 [2024-07-16 01:32:41.665019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.008 [2024-07-16 01:32:41.665034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.008 qpair failed and we were unable to recover it. 00:27:16.008 [2024-07-16 01:32:41.665254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.008 [2024-07-16 01:32:41.665284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.008 qpair failed and we were unable to recover it. 00:27:16.008 [2024-07-16 01:32:41.665491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.008 [2024-07-16 01:32:41.665523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.008 qpair failed and we were unable to recover it. 
00:27:16.008 [2024-07-16 01:32:41.665808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.008 [2024-07-16 01:32:41.665837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.008 qpair failed and we were unable to recover it. 00:27:16.008 [2024-07-16 01:32:41.666043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.008 [2024-07-16 01:32:41.666072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.008 qpair failed and we were unable to recover it. 00:27:16.008 [2024-07-16 01:32:41.666383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.008 [2024-07-16 01:32:41.666414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.008 qpair failed and we were unable to recover it. 00:27:16.008 [2024-07-16 01:32:41.666710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.008 [2024-07-16 01:32:41.666739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.008 qpair failed and we were unable to recover it. 00:27:16.008 [2024-07-16 01:32:41.666944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.008 [2024-07-16 01:32:41.666974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.008 qpair failed and we were unable to recover it. 00:27:16.008 [2024-07-16 01:32:41.667171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.008 [2024-07-16 01:32:41.667200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.008 qpair failed and we were unable to recover it. 00:27:16.008 [2024-07-16 01:32:41.667467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.008 [2024-07-16 01:32:41.667498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.008 qpair failed and we were unable to recover it. 00:27:16.008 [2024-07-16 01:32:41.667720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.008 [2024-07-16 01:32:41.667750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.008 qpair failed and we were unable to recover it. 00:27:16.008 [2024-07-16 01:32:41.668036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.008 [2024-07-16 01:32:41.668066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.008 qpair failed and we were unable to recover it. 00:27:16.008 [2024-07-16 01:32:41.668360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.008 [2024-07-16 01:32:41.668385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.008 qpair failed and we were unable to recover it. 
00:27:16.008 [2024-07-16 01:32:41.668569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.008 [2024-07-16 01:32:41.668585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.008 qpair failed and we were unable to recover it. 00:27:16.008 [2024-07-16 01:32:41.668750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.008 [2024-07-16 01:32:41.668766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.008 qpair failed and we were unable to recover it. 00:27:16.008 [2024-07-16 01:32:41.668961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.008 [2024-07-16 01:32:41.668977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.008 qpair failed and we were unable to recover it. 00:27:16.008 [2024-07-16 01:32:41.669153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.008 [2024-07-16 01:32:41.669187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.008 qpair failed and we were unable to recover it. 00:27:16.008 [2024-07-16 01:32:41.669384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.008 [2024-07-16 01:32:41.669416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.008 qpair failed and we were unable to recover it. 00:27:16.008 [2024-07-16 01:32:41.669602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.008 [2024-07-16 01:32:41.669633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.008 qpair failed and we were unable to recover it. 00:27:16.008 [2024-07-16 01:32:41.669819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.008 [2024-07-16 01:32:41.669849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.008 qpair failed and we were unable to recover it. 00:27:16.008 [2024-07-16 01:32:41.669976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.008 [2024-07-16 01:32:41.670006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.008 qpair failed and we were unable to recover it. 00:27:16.008 [2024-07-16 01:32:41.670212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.008 [2024-07-16 01:32:41.670242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.008 qpair failed and we were unable to recover it. 00:27:16.008 [2024-07-16 01:32:41.670441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.008 [2024-07-16 01:32:41.670457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.008 qpair failed and we were unable to recover it. 
00:27:16.008 [2024-07-16 01:32:41.670648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.008 [2024-07-16 01:32:41.670663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420
00:27:16.008 qpair failed and we were unable to recover it.
[... the same three-record pattern repeats for tqpair=0x1d2cfc0 through 01:32:41.674 ...]
00:27:16.009 [2024-07-16 01:32:41.674678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.009 [2024-07-16 01:32:41.674726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420
00:27:16.009 qpair failed and we were unable to recover it.
[... repeats for tqpair=0x7ff0c4000b90 through 01:32:41.676 ...]
00:27:16.009 [2024-07-16 01:32:41.676566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.009 [2024-07-16 01:32:41.676604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.009 qpair failed and we were unable to recover it.
[... repeats for tqpair=0x7ff0bc000b90 through 01:32:41.681 ...]
00:27:16.010 [2024-07-16 01:32:41.681449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.010 [2024-07-16 01:32:41.681495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420
00:27:16.010 qpair failed and we were unable to recover it.
[... the pattern continues unchanged (always connect() failed, errno = 111 against addr=10.0.0.2, port=4420), running against tqpair=0x7ff0b4000b90 through 01:32:41.690, then cycling back through tqpair=0x7ff0c4000b90, 0x1d2cfc0, and 0x7ff0bc000b90, until the final record against tqpair=0x7ff0c4000b90 at 01:32:41.712039 ...]
00:27:16.014 [2024-07-16 01:32:41.712205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.014 [2024-07-16 01:32:41.712222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.014 qpair failed and we were unable to recover it. 00:27:16.014 [2024-07-16 01:32:41.712449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.014 [2024-07-16 01:32:41.712481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.014 qpair failed and we were unable to recover it. 00:27:16.014 [2024-07-16 01:32:41.712679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.014 [2024-07-16 01:32:41.712710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.014 qpair failed and we were unable to recover it. 00:27:16.014 [2024-07-16 01:32:41.712902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.014 [2024-07-16 01:32:41.712935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.014 qpair failed and we were unable to recover it. 00:27:16.014 [2024-07-16 01:32:41.713066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.014 [2024-07-16 01:32:41.713097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.014 qpair failed and we were unable to recover it. 00:27:16.014 [2024-07-16 01:32:41.713244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.014 [2024-07-16 01:32:41.713261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.014 qpair failed and we were unable to recover it. 00:27:16.014 [2024-07-16 01:32:41.713354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.014 [2024-07-16 01:32:41.713370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.014 qpair failed and we were unable to recover it. 00:27:16.014 [2024-07-16 01:32:41.713600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.014 [2024-07-16 01:32:41.713621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.014 qpair failed and we were unable to recover it. 00:27:16.014 [2024-07-16 01:32:41.713851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.014 [2024-07-16 01:32:41.713886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.014 qpair failed and we were unable to recover it. 00:27:16.014 [2024-07-16 01:32:41.714049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.014 [2024-07-16 01:32:41.714067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.014 qpair failed and we were unable to recover it. 
00:27:16.014 [2024-07-16 01:32:41.714356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.014 [2024-07-16 01:32:41.714374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.014 qpair failed and we were unable to recover it. 00:27:16.014 [2024-07-16 01:32:41.714615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.014 [2024-07-16 01:32:41.714650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.014 qpair failed and we were unable to recover it. 00:27:16.014 [2024-07-16 01:32:41.714791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.014 [2024-07-16 01:32:41.714822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.014 qpair failed and we were unable to recover it. 00:27:16.014 [2024-07-16 01:32:41.715049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.014 [2024-07-16 01:32:41.715080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.014 qpair failed and we were unable to recover it. 00:27:16.014 [2024-07-16 01:32:41.715212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.014 [2024-07-16 01:32:41.715243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.014 qpair failed and we were unable to recover it. 00:27:16.014 [2024-07-16 01:32:41.715387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.014 [2024-07-16 01:32:41.715405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.014 qpair failed and we were unable to recover it. 00:27:16.014 [2024-07-16 01:32:41.715554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.014 [2024-07-16 01:32:41.715571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.014 qpair failed and we were unable to recover it. 00:27:16.014 [2024-07-16 01:32:41.715794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.014 [2024-07-16 01:32:41.715810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.014 qpair failed and we were unable to recover it. 00:27:16.014 [2024-07-16 01:32:41.715986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.014 [2024-07-16 01:32:41.716018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.014 qpair failed and we were unable to recover it. 00:27:16.014 [2024-07-16 01:32:41.716196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.014 [2024-07-16 01:32:41.716227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.014 qpair failed and we were unable to recover it. 
00:27:16.014 [2024-07-16 01:32:41.716432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.014 [2024-07-16 01:32:41.716464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.014 qpair failed and we were unable to recover it. 00:27:16.014 [2024-07-16 01:32:41.716690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.014 [2024-07-16 01:32:41.716707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.014 qpair failed and we were unable to recover it. 00:27:16.014 [2024-07-16 01:32:41.716805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.014 [2024-07-16 01:32:41.716836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.014 qpair failed and we were unable to recover it. 00:27:16.014 [2024-07-16 01:32:41.716972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.014 [2024-07-16 01:32:41.717003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.014 qpair failed and we were unable to recover it. 00:27:16.014 [2024-07-16 01:32:41.717186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.014 [2024-07-16 01:32:41.717218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.014 qpair failed and we were unable to recover it. 00:27:16.014 [2024-07-16 01:32:41.717404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.014 [2024-07-16 01:32:41.717421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.014 qpair failed and we were unable to recover it. 00:27:16.014 [2024-07-16 01:32:41.717646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.014 [2024-07-16 01:32:41.717677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.014 qpair failed and we were unable to recover it. 00:27:16.014 [2024-07-16 01:32:41.717908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.014 [2024-07-16 01:32:41.717939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.014 qpair failed and we were unable to recover it. 00:27:16.014 [2024-07-16 01:32:41.718130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.014 [2024-07-16 01:32:41.718161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.014 qpair failed and we were unable to recover it. 00:27:16.014 [2024-07-16 01:32:41.718407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.014 [2024-07-16 01:32:41.718424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.014 qpair failed and we were unable to recover it. 
00:27:16.014 [2024-07-16 01:32:41.718509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.014 [2024-07-16 01:32:41.718524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.014 qpair failed and we were unable to recover it. 00:27:16.014 [2024-07-16 01:32:41.718679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.014 [2024-07-16 01:32:41.718695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.014 qpair failed and we were unable to recover it. 00:27:16.014 [2024-07-16 01:32:41.718777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.014 [2024-07-16 01:32:41.718794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.015 qpair failed and we were unable to recover it. 00:27:16.015 [2024-07-16 01:32:41.718915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.015 [2024-07-16 01:32:41.718931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.015 qpair failed and we were unable to recover it. 00:27:16.015 [2024-07-16 01:32:41.719150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.015 [2024-07-16 01:32:41.719167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.015 qpair failed and we were unable to recover it. 00:27:16.015 [2024-07-16 01:32:41.719404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.015 [2024-07-16 01:32:41.719437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.015 qpair failed and we were unable to recover it. 00:27:16.015 [2024-07-16 01:32:41.719629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.015 [2024-07-16 01:32:41.719660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.015 qpair failed and we were unable to recover it. 00:27:16.015 [2024-07-16 01:32:41.719782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.015 [2024-07-16 01:32:41.719813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.015 qpair failed and we were unable to recover it. 00:27:16.015 [2024-07-16 01:32:41.720003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.015 [2024-07-16 01:32:41.720034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.015 qpair failed and we were unable to recover it. 00:27:16.015 [2024-07-16 01:32:41.720175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.015 [2024-07-16 01:32:41.720206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.015 qpair failed and we were unable to recover it. 
00:27:16.015 [2024-07-16 01:32:41.720351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.015 [2024-07-16 01:32:41.720383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.015 qpair failed and we were unable to recover it. 00:27:16.015 [2024-07-16 01:32:41.720656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.015 [2024-07-16 01:32:41.720672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.015 qpair failed and we were unable to recover it. 00:27:16.015 [2024-07-16 01:32:41.720868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.015 [2024-07-16 01:32:41.720885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.015 qpair failed and we were unable to recover it. 00:27:16.015 [2024-07-16 01:32:41.720964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.015 [2024-07-16 01:32:41.720980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.015 qpair failed and we were unable to recover it. 00:27:16.015 [2024-07-16 01:32:41.721138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.015 [2024-07-16 01:32:41.721155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.015 qpair failed and we were unable to recover it. 00:27:16.015 [2024-07-16 01:32:41.721302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.015 [2024-07-16 01:32:41.721319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.015 qpair failed and we were unable to recover it. 00:27:16.015 [2024-07-16 01:32:41.721598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.015 [2024-07-16 01:32:41.721631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.015 qpair failed and we were unable to recover it. 00:27:16.015 [2024-07-16 01:32:41.721811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.015 [2024-07-16 01:32:41.721847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.015 qpair failed and we were unable to recover it. 00:27:16.015 [2024-07-16 01:32:41.722035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.015 [2024-07-16 01:32:41.722065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.015 qpair failed and we were unable to recover it. 00:27:16.015 [2024-07-16 01:32:41.722187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.015 [2024-07-16 01:32:41.722202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.015 qpair failed and we were unable to recover it. 
00:27:16.015 [2024-07-16 01:32:41.722285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.015 [2024-07-16 01:32:41.722302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.015 qpair failed and we were unable to recover it. 00:27:16.015 [2024-07-16 01:32:41.722392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.015 [2024-07-16 01:32:41.722407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.015 qpair failed and we were unable to recover it. 00:27:16.015 [2024-07-16 01:32:41.722566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.015 [2024-07-16 01:32:41.722597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.015 qpair failed and we were unable to recover it. 00:27:16.015 [2024-07-16 01:32:41.722782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.015 [2024-07-16 01:32:41.722813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.015 qpair failed and we were unable to recover it. 00:27:16.015 [2024-07-16 01:32:41.723085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.015 [2024-07-16 01:32:41.723130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.015 qpair failed and we were unable to recover it. 00:27:16.015 [2024-07-16 01:32:41.723350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.015 [2024-07-16 01:32:41.723368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.015 qpair failed and we were unable to recover it. 00:27:16.015 [2024-07-16 01:32:41.723489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.015 [2024-07-16 01:32:41.723506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.015 qpair failed and we were unable to recover it. 00:27:16.015 [2024-07-16 01:32:41.723729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.015 [2024-07-16 01:32:41.723745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.015 qpair failed and we were unable to recover it. 00:27:16.015 [2024-07-16 01:32:41.723901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.015 [2024-07-16 01:32:41.723918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.015 qpair failed and we were unable to recover it. 00:27:16.015 [2024-07-16 01:32:41.724045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.015 [2024-07-16 01:32:41.724077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.015 qpair failed and we were unable to recover it. 
00:27:16.015 [2024-07-16 01:32:41.724257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.015 [2024-07-16 01:32:41.724288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.015 qpair failed and we were unable to recover it. 00:27:16.015 [2024-07-16 01:32:41.724520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.015 [2024-07-16 01:32:41.724553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.015 qpair failed and we were unable to recover it. 00:27:16.015 [2024-07-16 01:32:41.724737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.015 [2024-07-16 01:32:41.724770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.015 qpair failed and we were unable to recover it. 00:27:16.015 [2024-07-16 01:32:41.724962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.015 [2024-07-16 01:32:41.724993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.015 qpair failed and we were unable to recover it. 00:27:16.015 [2024-07-16 01:32:41.725190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.015 [2024-07-16 01:32:41.725222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.015 qpair failed and we were unable to recover it. 00:27:16.015 [2024-07-16 01:32:41.725429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.015 [2024-07-16 01:32:41.725462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.015 qpair failed and we were unable to recover it. 00:27:16.015 [2024-07-16 01:32:41.725583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.015 [2024-07-16 01:32:41.725613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.015 qpair failed and we were unable to recover it. 00:27:16.015 [2024-07-16 01:32:41.725734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.015 [2024-07-16 01:32:41.725765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.015 qpair failed and we were unable to recover it. 00:27:16.015 [2024-07-16 01:32:41.725902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.015 [2024-07-16 01:32:41.725933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.015 qpair failed and we were unable to recover it. 00:27:16.015 [2024-07-16 01:32:41.726221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.015 [2024-07-16 01:32:41.726238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.015 qpair failed and we were unable to recover it. 
00:27:16.015 [2024-07-16 01:32:41.726465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.015 [2024-07-16 01:32:41.726482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.015 qpair failed and we were unable to recover it. 00:27:16.015 [2024-07-16 01:32:41.726588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.015 [2024-07-16 01:32:41.726604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.015 qpair failed and we were unable to recover it. 00:27:16.015 [2024-07-16 01:32:41.726689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.015 [2024-07-16 01:32:41.726704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.015 qpair failed and we were unable to recover it. 00:27:16.015 [2024-07-16 01:32:41.726898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.015 [2024-07-16 01:32:41.726915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.016 qpair failed and we were unable to recover it. 00:27:16.016 [2024-07-16 01:32:41.727049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.016 [2024-07-16 01:32:41.727080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.016 qpair failed and we were unable to recover it. 00:27:16.016 [2024-07-16 01:32:41.727207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.016 [2024-07-16 01:32:41.727237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.016 qpair failed and we were unable to recover it. 00:27:16.016 [2024-07-16 01:32:41.727458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.016 [2024-07-16 01:32:41.727490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.016 qpair failed and we were unable to recover it. 00:27:16.016 [2024-07-16 01:32:41.727670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.016 [2024-07-16 01:32:41.727701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.016 qpair failed and we were unable to recover it. 00:27:16.016 [2024-07-16 01:32:41.727827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.016 [2024-07-16 01:32:41.727859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.016 qpair failed and we were unable to recover it. 00:27:16.016 [2024-07-16 01:32:41.728002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.016 [2024-07-16 01:32:41.728033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.016 qpair failed and we were unable to recover it. 
00:27:16.016 [2024-07-16 01:32:41.728299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.016 [2024-07-16 01:32:41.728330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.016 qpair failed and we were unable to recover it. 00:27:16.016 [2024-07-16 01:32:41.728512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.016 [2024-07-16 01:32:41.728529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.016 qpair failed and we were unable to recover it. 00:27:16.016 [2024-07-16 01:32:41.728722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.016 [2024-07-16 01:32:41.728753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.016 qpair failed and we were unable to recover it. 00:27:16.016 [2024-07-16 01:32:41.729040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.016 [2024-07-16 01:32:41.729071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.016 qpair failed and we were unable to recover it. 00:27:16.016 [2024-07-16 01:32:41.729207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.016 [2024-07-16 01:32:41.729237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.016 qpair failed and we were unable to recover it. 00:27:16.016 [2024-07-16 01:32:41.729439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.016 [2024-07-16 01:32:41.729456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.016 qpair failed and we were unable to recover it. 00:27:16.016 [2024-07-16 01:32:41.729636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.016 [2024-07-16 01:32:41.729666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.016 qpair failed and we were unable to recover it. 00:27:16.016 [2024-07-16 01:32:41.729864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.016 [2024-07-16 01:32:41.729900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.016 qpair failed and we were unable to recover it. 00:27:16.016 [2024-07-16 01:32:41.730035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.016 [2024-07-16 01:32:41.730065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.016 qpair failed and we were unable to recover it. 00:27:16.016 [2024-07-16 01:32:41.730314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.016 [2024-07-16 01:32:41.730352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.016 qpair failed and we were unable to recover it. 
00:27:16.016 [2024-07-16 01:32:41.730478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.016 [2024-07-16 01:32:41.730509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.016 qpair failed and we were unable to recover it. 00:27:16.016 [2024-07-16 01:32:41.730719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.016 [2024-07-16 01:32:41.730736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.016 qpair failed and we were unable to recover it. 00:27:16.016 [2024-07-16 01:32:41.730847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.016 [2024-07-16 01:32:41.730885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.016 qpair failed and we were unable to recover it. 00:27:16.016 [2024-07-16 01:32:41.731138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.016 [2024-07-16 01:32:41.731169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.016 qpair failed and we were unable to recover it. 00:27:16.016 [2024-07-16 01:32:41.731294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.016 [2024-07-16 01:32:41.731324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.016 qpair failed and we were unable to recover it. 00:27:16.016 [2024-07-16 01:32:41.731520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.016 [2024-07-16 01:32:41.731538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.016 qpair failed and we were unable to recover it. 00:27:16.016 [2024-07-16 01:32:41.731632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.016 [2024-07-16 01:32:41.731647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.016 qpair failed and we were unable to recover it. 00:27:16.016 [2024-07-16 01:32:41.731728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.016 [2024-07-16 01:32:41.731744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.016 qpair failed and we were unable to recover it. 00:27:16.016 [2024-07-16 01:32:41.731822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.016 [2024-07-16 01:32:41.731836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.016 qpair failed and we were unable to recover it. 00:27:16.016 [2024-07-16 01:32:41.731926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.016 [2024-07-16 01:32:41.731940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.016 qpair failed and we were unable to recover it. 
00:27:16.016 [2024-07-16 01:32:41.732089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.016 [2024-07-16 01:32:41.732105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.016 qpair failed and we were unable to recover it. 00:27:16.016 [2024-07-16 01:32:41.732327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.016 [2024-07-16 01:32:41.732369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.016 qpair failed and we were unable to recover it. 00:27:16.016 [2024-07-16 01:32:41.732487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.016 [2024-07-16 01:32:41.732519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.016 qpair failed and we were unable to recover it. 00:27:16.016 [2024-07-16 01:32:41.732723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.016 [2024-07-16 01:32:41.732755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.016 qpair failed and we were unable to recover it. 00:27:16.016 [2024-07-16 01:32:41.732888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.016 [2024-07-16 01:32:41.732918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.016 qpair failed and we were unable to recover it. 00:27:16.016 [2024-07-16 01:32:41.733035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.016 [2024-07-16 01:32:41.733066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.016 qpair failed and we were unable to recover it. 00:27:16.016 [2024-07-16 01:32:41.733251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.016 [2024-07-16 01:32:41.733267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.016 qpair failed and we were unable to recover it. 00:27:16.016 [2024-07-16 01:32:41.733428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.016 [2024-07-16 01:32:41.733460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.016 qpair failed and we were unable to recover it. 00:27:16.016 [2024-07-16 01:32:41.733651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.016 [2024-07-16 01:32:41.733683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.016 qpair failed and we were unable to recover it. 00:27:16.016 [2024-07-16 01:32:41.733805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.016 [2024-07-16 01:32:41.733836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.016 qpair failed and we were unable to recover it. 
00:27:16.016 [2024-07-16 01:32:41.734020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.016 [2024-07-16 01:32:41.734051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.016 qpair failed and we were unable to recover it. 00:27:16.016 [2024-07-16 01:32:41.734173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.016 [2024-07-16 01:32:41.734190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.016 qpair failed and we were unable to recover it. 00:27:16.016 [2024-07-16 01:32:41.734395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.016 [2024-07-16 01:32:41.734428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.016 qpair failed and we were unable to recover it. 00:27:16.016 [2024-07-16 01:32:41.734612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.017 [2024-07-16 01:32:41.734643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.017 qpair failed and we were unable to recover it. 00:27:16.017 [2024-07-16 01:32:41.734825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.017 [2024-07-16 01:32:41.734857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.017 qpair failed and we were unable to recover it. 00:27:16.017 [2024-07-16 01:32:41.735049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.017 [2024-07-16 01:32:41.735066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.017 qpair failed and we were unable to recover it. 00:27:16.017 [2024-07-16 01:32:41.735174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.017 [2024-07-16 01:32:41.735191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.017 qpair failed and we were unable to recover it. 00:27:16.017 [2024-07-16 01:32:41.735301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.017 [2024-07-16 01:32:41.735317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.017 qpair failed and we were unable to recover it. 00:27:16.017 [2024-07-16 01:32:41.735497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.017 [2024-07-16 01:32:41.735529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.017 qpair failed and we were unable to recover it. 00:27:16.017 [2024-07-16 01:32:41.735738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.017 [2024-07-16 01:32:41.735769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.017 qpair failed and we were unable to recover it. 
00:27:16.017 [2024-07-16 01:32:41.735959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.017 [2024-07-16 01:32:41.735990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.017 qpair failed and we were unable to recover it. 00:27:16.017 [2024-07-16 01:32:41.736112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.017 [2024-07-16 01:32:41.736152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.017 qpair failed and we were unable to recover it. 00:27:16.017 [2024-07-16 01:32:41.736242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.017 [2024-07-16 01:32:41.736258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.017 qpair failed and we were unable to recover it. 00:27:16.017 [2024-07-16 01:32:41.736499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.017 [2024-07-16 01:32:41.736515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.017 qpair failed and we were unable to recover it. 00:27:16.017 [2024-07-16 01:32:41.736743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.017 [2024-07-16 01:32:41.736759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.017 qpair failed and we were unable to recover it. 00:27:16.017 [2024-07-16 01:32:41.737031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.017 [2024-07-16 01:32:41.737062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.017 qpair failed and we were unable to recover it. 00:27:16.017 [2024-07-16 01:32:41.737281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.017 [2024-07-16 01:32:41.737313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.017 qpair failed and we were unable to recover it. 00:27:16.017 [2024-07-16 01:32:41.737442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.017 [2024-07-16 01:32:41.737484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.017 qpair failed and we were unable to recover it. 00:27:16.017 [2024-07-16 01:32:41.737582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.017 [2024-07-16 01:32:41.737597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.017 qpair failed and we were unable to recover it. 00:27:16.017 [2024-07-16 01:32:41.737761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.017 [2024-07-16 01:32:41.737778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.017 qpair failed and we were unable to recover it. 
00:27:16.017 [2024-07-16 01:32:41.737888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.017 [2024-07-16 01:32:41.737904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.017 qpair failed and we were unable to recover it.
[... the same two-line "connect() failed, errno = 111" / "sock connection error" sequence, each ending in "qpair failed and we were unable to recover it.", repeats for tqpair=0x7ff0c4000b90 from 01:32:41.738142 through 01:32:41.766244 ...]
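Every retry above fails the same way: errno 111 is ECONNREFUSED on Linux, returned when connect() sends a TCP SYN to 10.0.0.2:4420 (4420 is the IANA-assigned NVMe/TCP port) and receives a RST because no listener is accepting on that address and port. A minimal sketch that reproduces the same connect() failure outside SPDK, assuming nothing is listening on the chosen local port:

    import errno
    import socket

    # On Linux, errno 111 is ECONNREFUSED: the TCP SYN is answered with a
    # RST because no listener is bound to the target address/port.
    print(errno.ECONNREFUSED)  # 111 on Linux

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        # Assuming no NVMe/TCP target is listening on this port, this
        # reproduces the failure mode shown in the log above.
        sock.connect(("127.0.0.1", 4420))
    except OSError as exc:
        print(f"connect() failed, errno = {exc.errno}")
    finally:
        sock.close()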
00:27:16.021 [2024-07-16 01:32:41.766507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.021 [2024-07-16 01:32:41.766578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.021 qpair failed and we were unable to recover it.
00:27:16.021 [2024-07-16 01:32:41.766750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.021 [2024-07-16 01:32:41.766819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.021 qpair failed and we were unable to recover it.
[... the same sequence repeats for tqpair=0x7ff0bc000b90 from 01:32:41.767050 through 01:32:41.773810 ...]
00:27:16.022 [2024-07-16 01:32:41.774002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.022 [2024-07-16 01:32:41.774042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.022 qpair failed and we were unable to recover it.
[... the same sequence repeats for tqpair=0x7ff0b4000b90 from 01:32:41.774246 through 01:32:41.779023 ...]
00:27:16.022 [2024-07-16 01:32:41.779211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.022 [2024-07-16 01:32:41.779242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.022 qpair failed and we were unable to recover it.
00:27:16.022 [2024-07-16 01:32:41.779433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.022 [2024-07-16 01:32:41.779465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.022 qpair failed and we were unable to recover it. 00:27:16.022 [2024-07-16 01:32:41.779711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.022 [2024-07-16 01:32:41.779742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.022 qpair failed and we were unable to recover it. 00:27:16.022 [2024-07-16 01:32:41.779855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.022 [2024-07-16 01:32:41.779886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.022 qpair failed and we were unable to recover it. 00:27:16.022 [2024-07-16 01:32:41.780055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.022 [2024-07-16 01:32:41.780085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.022 qpair failed and we were unable to recover it. 00:27:16.022 [2024-07-16 01:32:41.780329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.022 [2024-07-16 01:32:41.780374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.022 qpair failed and we were unable to recover it. 00:27:16.022 [2024-07-16 01:32:41.780570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.022 [2024-07-16 01:32:41.780601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.022 qpair failed and we were unable to recover it. 00:27:16.022 [2024-07-16 01:32:41.780792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.022 [2024-07-16 01:32:41.780823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.022 qpair failed and we were unable to recover it. 00:27:16.022 [2024-07-16 01:32:41.780956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.023 [2024-07-16 01:32:41.780987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.023 qpair failed and we were unable to recover it. 00:27:16.023 [2024-07-16 01:32:41.781163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.023 [2024-07-16 01:32:41.781194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.023 qpair failed and we were unable to recover it. 00:27:16.023 [2024-07-16 01:32:41.781367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.023 [2024-07-16 01:32:41.781399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.023 qpair failed and we were unable to recover it. 
00:27:16.023 [2024-07-16 01:32:41.781508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.023 [2024-07-16 01:32:41.781539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.023 qpair failed and we were unable to recover it. 00:27:16.023 [2024-07-16 01:32:41.781666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.023 [2024-07-16 01:32:41.781697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.023 qpair failed and we were unable to recover it. 00:27:16.023 [2024-07-16 01:32:41.781817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.023 [2024-07-16 01:32:41.781847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.023 qpair failed and we were unable to recover it. 00:27:16.023 [2024-07-16 01:32:41.782021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.023 [2024-07-16 01:32:41.782051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.023 qpair failed and we were unable to recover it. 00:27:16.023 [2024-07-16 01:32:41.782290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.023 [2024-07-16 01:32:41.782370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.023 qpair failed and we were unable to recover it. 00:27:16.023 [2024-07-16 01:32:41.782511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.023 [2024-07-16 01:32:41.782547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.023 qpair failed and we were unable to recover it. 00:27:16.023 [2024-07-16 01:32:41.782750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.023 [2024-07-16 01:32:41.782781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.023 qpair failed and we were unable to recover it. 00:27:16.023 [2024-07-16 01:32:41.782956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.023 [2024-07-16 01:32:41.782985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.023 qpair failed and we were unable to recover it. 00:27:16.023 [2024-07-16 01:32:41.783110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.023 [2024-07-16 01:32:41.783141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.023 qpair failed and we were unable to recover it. 00:27:16.023 [2024-07-16 01:32:41.783355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.023 [2024-07-16 01:32:41.783386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.023 qpair failed and we were unable to recover it. 
00:27:16.023 [2024-07-16 01:32:41.783513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.023 [2024-07-16 01:32:41.783544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.023 qpair failed and we were unable to recover it. 00:27:16.023 [2024-07-16 01:32:41.783734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.023 [2024-07-16 01:32:41.783765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.023 qpair failed and we were unable to recover it. 00:27:16.023 [2024-07-16 01:32:41.783962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.023 [2024-07-16 01:32:41.783992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.023 qpair failed and we were unable to recover it. 00:27:16.023 [2024-07-16 01:32:41.784204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.023 [2024-07-16 01:32:41.784234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.023 qpair failed and we were unable to recover it. 00:27:16.023 [2024-07-16 01:32:41.784479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.023 [2024-07-16 01:32:41.784511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.023 qpair failed and we were unable to recover it. 00:27:16.023 [2024-07-16 01:32:41.784627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.023 [2024-07-16 01:32:41.784643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.023 qpair failed and we were unable to recover it. 00:27:16.023 [2024-07-16 01:32:41.784722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.023 [2024-07-16 01:32:41.784738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.023 qpair failed and we were unable to recover it. 00:27:16.023 [2024-07-16 01:32:41.784834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.023 [2024-07-16 01:32:41.784856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.023 qpair failed and we were unable to recover it. 00:27:16.023 [2024-07-16 01:32:41.784954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.023 [2024-07-16 01:32:41.784968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.023 qpair failed and we were unable to recover it. 00:27:16.023 [2024-07-16 01:32:41.785182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.023 [2024-07-16 01:32:41.785213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.023 qpair failed and we were unable to recover it. 
00:27:16.023 [2024-07-16 01:32:41.785325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.023 [2024-07-16 01:32:41.785365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.023 qpair failed and we were unable to recover it. 00:27:16.023 [2024-07-16 01:32:41.785500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.023 [2024-07-16 01:32:41.785532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.023 qpair failed and we were unable to recover it. 00:27:16.023 [2024-07-16 01:32:41.785730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.023 [2024-07-16 01:32:41.785741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.023 qpair failed and we were unable to recover it. 00:27:16.023 [2024-07-16 01:32:41.785810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.023 [2024-07-16 01:32:41.785820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.023 qpair failed and we were unable to recover it. 00:27:16.023 [2024-07-16 01:32:41.786045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.023 [2024-07-16 01:32:41.786075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.023 qpair failed and we were unable to recover it. 00:27:16.023 [2024-07-16 01:32:41.786248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.023 [2024-07-16 01:32:41.786278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.023 qpair failed and we were unable to recover it. 00:27:16.023 [2024-07-16 01:32:41.786479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.023 [2024-07-16 01:32:41.786511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.023 qpair failed and we were unable to recover it. 00:27:16.023 [2024-07-16 01:32:41.786625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.023 [2024-07-16 01:32:41.786655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.023 qpair failed and we were unable to recover it. 00:27:16.023 [2024-07-16 01:32:41.786850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.023 [2024-07-16 01:32:41.786880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.023 qpair failed and we were unable to recover it. 00:27:16.023 [2024-07-16 01:32:41.787006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.023 [2024-07-16 01:32:41.787038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.023 qpair failed and we were unable to recover it. 
00:27:16.023 [2024-07-16 01:32:41.787157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.024 [2024-07-16 01:32:41.787187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.024 qpair failed and we were unable to recover it. 00:27:16.024 [2024-07-16 01:32:41.787381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.024 [2024-07-16 01:32:41.787417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.024 qpair failed and we were unable to recover it. 00:27:16.024 [2024-07-16 01:32:41.787528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.024 [2024-07-16 01:32:41.787540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.024 qpair failed and we were unable to recover it. 00:27:16.024 [2024-07-16 01:32:41.787723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.024 [2024-07-16 01:32:41.787755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.024 qpair failed and we were unable to recover it. 00:27:16.024 [2024-07-16 01:32:41.787879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.024 [2024-07-16 01:32:41.787911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.024 qpair failed and we were unable to recover it. 00:27:16.024 [2024-07-16 01:32:41.788020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.024 [2024-07-16 01:32:41.788050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.024 qpair failed and we were unable to recover it. 00:27:16.024 [2024-07-16 01:32:41.788289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.024 [2024-07-16 01:32:41.788320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.024 qpair failed and we were unable to recover it. 00:27:16.024 [2024-07-16 01:32:41.788458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.024 [2024-07-16 01:32:41.788490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.024 qpair failed and we were unable to recover it. 00:27:16.024 [2024-07-16 01:32:41.788695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.024 [2024-07-16 01:32:41.788727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.024 qpair failed and we were unable to recover it. 00:27:16.024 [2024-07-16 01:32:41.788919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.024 [2024-07-16 01:32:41.788949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.024 qpair failed and we were unable to recover it. 
00:27:16.024 [2024-07-16 01:32:41.789062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.024 [2024-07-16 01:32:41.789092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.024 qpair failed and we were unable to recover it. 00:27:16.024 [2024-07-16 01:32:41.789335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.024 [2024-07-16 01:32:41.789376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.024 qpair failed and we were unable to recover it. 00:27:16.024 [2024-07-16 01:32:41.789562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.024 [2024-07-16 01:32:41.789593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.024 qpair failed and we were unable to recover it. 00:27:16.024 [2024-07-16 01:32:41.789791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.024 [2024-07-16 01:32:41.789802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.024 qpair failed and we were unable to recover it. 00:27:16.024 [2024-07-16 01:32:41.789912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.024 [2024-07-16 01:32:41.789942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.024 qpair failed and we were unable to recover it. 00:27:16.024 [2024-07-16 01:32:41.790125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.024 [2024-07-16 01:32:41.790156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.024 qpair failed and we were unable to recover it. 00:27:16.024 [2024-07-16 01:32:41.790330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.024 [2024-07-16 01:32:41.790369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.024 qpair failed and we were unable to recover it. 00:27:16.024 [2024-07-16 01:32:41.790499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.024 [2024-07-16 01:32:41.790535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.024 qpair failed and we were unable to recover it. 00:27:16.024 [2024-07-16 01:32:41.790677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.024 [2024-07-16 01:32:41.790688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.024 qpair failed and we were unable to recover it. 00:27:16.024 [2024-07-16 01:32:41.790773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.024 [2024-07-16 01:32:41.790783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.024 qpair failed and we were unable to recover it. 
00:27:16.024 [2024-07-16 01:32:41.790858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.024 [2024-07-16 01:32:41.790868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.024 qpair failed and we were unable to recover it. 00:27:16.024 [2024-07-16 01:32:41.790948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.024 [2024-07-16 01:32:41.790957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.024 qpair failed and we were unable to recover it. 00:27:16.024 [2024-07-16 01:32:41.791160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.024 [2024-07-16 01:32:41.791171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.024 qpair failed and we were unable to recover it. 00:27:16.024 [2024-07-16 01:32:41.791239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.024 [2024-07-16 01:32:41.791249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.024 qpair failed and we were unable to recover it. 00:27:16.024 [2024-07-16 01:32:41.791395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.024 [2024-07-16 01:32:41.791430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.024 qpair failed and we were unable to recover it. 00:27:16.024 [2024-07-16 01:32:41.791550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.024 [2024-07-16 01:32:41.791580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.024 qpair failed and we were unable to recover it. 00:27:16.024 [2024-07-16 01:32:41.791708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.024 [2024-07-16 01:32:41.791739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.024 qpair failed and we were unable to recover it. 00:27:16.024 [2024-07-16 01:32:41.791935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.024 [2024-07-16 01:32:41.791971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.024 qpair failed and we were unable to recover it. 00:27:16.024 [2024-07-16 01:32:41.792080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.024 [2024-07-16 01:32:41.792110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.024 qpair failed and we were unable to recover it. 00:27:16.024 [2024-07-16 01:32:41.792291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.024 [2024-07-16 01:32:41.792322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.024 qpair failed and we were unable to recover it. 
00:27:16.024 [2024-07-16 01:32:41.792516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.024 [2024-07-16 01:32:41.792547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.024 qpair failed and we were unable to recover it. 00:27:16.024 [2024-07-16 01:32:41.792745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.024 [2024-07-16 01:32:41.792776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.024 qpair failed and we were unable to recover it. 00:27:16.024 [2024-07-16 01:32:41.792908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.024 [2024-07-16 01:32:41.792939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.024 qpair failed and we were unable to recover it. 00:27:16.024 [2024-07-16 01:32:41.793069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.024 [2024-07-16 01:32:41.793100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.024 qpair failed and we were unable to recover it. 00:27:16.024 [2024-07-16 01:32:41.793349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.024 [2024-07-16 01:32:41.793380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.024 qpair failed and we were unable to recover it. 00:27:16.024 [2024-07-16 01:32:41.793624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.024 [2024-07-16 01:32:41.793655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.024 qpair failed and we were unable to recover it. 00:27:16.024 [2024-07-16 01:32:41.793781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.024 [2024-07-16 01:32:41.793813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.024 qpair failed and we were unable to recover it. 00:27:16.024 [2024-07-16 01:32:41.794025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.024 [2024-07-16 01:32:41.794056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.024 qpair failed and we were unable to recover it. 00:27:16.024 [2024-07-16 01:32:41.794312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.024 [2024-07-16 01:32:41.794350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.024 qpair failed and we were unable to recover it. 00:27:16.024 [2024-07-16 01:32:41.794562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.024 [2024-07-16 01:32:41.794593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.024 qpair failed and we were unable to recover it. 
00:27:16.024 [2024-07-16 01:32:41.794786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.024 [2024-07-16 01:32:41.794817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.024 qpair failed and we were unable to recover it. 00:27:16.025 [2024-07-16 01:32:41.794944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.025 [2024-07-16 01:32:41.794976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.025 qpair failed and we were unable to recover it. 00:27:16.025 [2024-07-16 01:32:41.795242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.025 [2024-07-16 01:32:41.795274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.025 qpair failed and we were unable to recover it. 00:27:16.025 [2024-07-16 01:32:41.795545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.025 [2024-07-16 01:32:41.795580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.025 qpair failed and we were unable to recover it. 00:27:16.025 [2024-07-16 01:32:41.795851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.025 [2024-07-16 01:32:41.795862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.025 qpair failed and we were unable to recover it. 00:27:16.025 [2024-07-16 01:32:41.796053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.025 [2024-07-16 01:32:41.796083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.025 qpair failed and we were unable to recover it. 00:27:16.025 [2024-07-16 01:32:41.796276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.025 [2024-07-16 01:32:41.796306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.025 qpair failed and we were unable to recover it. 00:27:16.025 [2024-07-16 01:32:41.796509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.025 [2024-07-16 01:32:41.796546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.025 qpair failed and we were unable to recover it. 00:27:16.025 [2024-07-16 01:32:41.796693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.025 [2024-07-16 01:32:41.796704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.025 qpair failed and we were unable to recover it. 00:27:16.025 [2024-07-16 01:32:41.796776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.025 [2024-07-16 01:32:41.796787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.025 qpair failed and we were unable to recover it. 
00:27:16.025 [2024-07-16 01:32:41.796990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.025 [2024-07-16 01:32:41.797021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.025 qpair failed and we were unable to recover it. 00:27:16.025 [2024-07-16 01:32:41.797200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.025 [2024-07-16 01:32:41.797231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.025 qpair failed and we were unable to recover it. 00:27:16.025 [2024-07-16 01:32:41.797356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.025 [2024-07-16 01:32:41.797367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.025 qpair failed and we were unable to recover it. 00:27:16.025 [2024-07-16 01:32:41.797503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.025 [2024-07-16 01:32:41.797515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.025 qpair failed and we were unable to recover it. 00:27:16.025 [2024-07-16 01:32:41.797754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.025 [2024-07-16 01:32:41.797772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.025 qpair failed and we were unable to recover it. 00:27:16.025 [2024-07-16 01:32:41.797925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.025 [2024-07-16 01:32:41.797941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.025 qpair failed and we were unable to recover it. 00:27:16.025 [2024-07-16 01:32:41.798124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.025 [2024-07-16 01:32:41.798154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.025 qpair failed and we were unable to recover it. 00:27:16.025 [2024-07-16 01:32:41.798368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.025 [2024-07-16 01:32:41.798398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.025 qpair failed and we were unable to recover it. 00:27:16.025 [2024-07-16 01:32:41.798610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.025 [2024-07-16 01:32:41.798640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.025 qpair failed and we were unable to recover it. 00:27:16.025 [2024-07-16 01:32:41.798761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.025 [2024-07-16 01:32:41.798790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.025 qpair failed and we were unable to recover it. 
00:27:16.025 [2024-07-16 01:32:41.798917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.025 [2024-07-16 01:32:41.798947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.025 qpair failed and we were unable to recover it. 00:27:16.025 [2024-07-16 01:32:41.799074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.025 [2024-07-16 01:32:41.799105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.025 qpair failed and we were unable to recover it. 00:27:16.025 [2024-07-16 01:32:41.799231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.025 [2024-07-16 01:32:41.799261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.025 qpair failed and we were unable to recover it. 00:27:16.025 [2024-07-16 01:32:41.799442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.025 [2024-07-16 01:32:41.799473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.025 qpair failed and we were unable to recover it. 00:27:16.025 [2024-07-16 01:32:41.799659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.025 [2024-07-16 01:32:41.799689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.025 qpair failed and we were unable to recover it. 00:27:16.025 [2024-07-16 01:32:41.799868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.025 [2024-07-16 01:32:41.799899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.025 qpair failed and we were unable to recover it. 00:27:16.025 [2024-07-16 01:32:41.800022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.025 [2024-07-16 01:32:41.800052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.025 qpair failed and we were unable to recover it. 00:27:16.025 [2024-07-16 01:32:41.800194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.025 [2024-07-16 01:32:41.800229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.025 qpair failed and we were unable to recover it. 00:27:16.025 [2024-07-16 01:32:41.800371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.025 [2024-07-16 01:32:41.800403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.025 qpair failed and we were unable to recover it. 00:27:16.025 [2024-07-16 01:32:41.800588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.025 [2024-07-16 01:32:41.800605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.025 qpair failed and we were unable to recover it. 
00:27:16.025 [2024-07-16 01:32:41.800753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.025 [2024-07-16 01:32:41.800783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.025 qpair failed and we were unable to recover it. 00:27:16.025 [2024-07-16 01:32:41.800976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.025 [2024-07-16 01:32:41.801008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.025 qpair failed and we were unable to recover it. 00:27:16.025 [2024-07-16 01:32:41.801184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.025 [2024-07-16 01:32:41.801214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.025 qpair failed and we were unable to recover it. 00:27:16.025 [2024-07-16 01:32:41.801352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.025 [2024-07-16 01:32:41.801369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.025 qpair failed and we were unable to recover it. 00:27:16.025 [2024-07-16 01:32:41.801453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.025 [2024-07-16 01:32:41.801468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.025 qpair failed and we were unable to recover it. 00:27:16.025 [2024-07-16 01:32:41.801552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.025 [2024-07-16 01:32:41.801565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.025 qpair failed and we were unable to recover it. 00:27:16.025 [2024-07-16 01:32:41.801702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.025 [2024-07-16 01:32:41.801714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.025 qpair failed and we were unable to recover it. 00:27:16.026 [2024-07-16 01:32:41.801863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.026 [2024-07-16 01:32:41.801874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.026 qpair failed and we were unable to recover it. 00:27:16.026 [2024-07-16 01:32:41.802075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.026 [2024-07-16 01:32:41.802086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.026 qpair failed and we were unable to recover it. 00:27:16.026 [2024-07-16 01:32:41.802171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.026 [2024-07-16 01:32:41.802181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.026 qpair failed and we were unable to recover it. 
00:27:16.026 [2024-07-16 01:32:41.802388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.026 [2024-07-16 01:32:41.802419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.026 qpair failed and we were unable to recover it. 00:27:16.026 [2024-07-16 01:32:41.802554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.026 [2024-07-16 01:32:41.802585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.026 qpair failed and we were unable to recover it. 00:27:16.026 [2024-07-16 01:32:41.802806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.026 [2024-07-16 01:32:41.802837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.026 qpair failed and we were unable to recover it. 00:27:16.026 [2024-07-16 01:32:41.802974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.026 [2024-07-16 01:32:41.803004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.026 qpair failed and we were unable to recover it. 00:27:16.026 [2024-07-16 01:32:41.803200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.026 [2024-07-16 01:32:41.803230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.026 qpair failed and we were unable to recover it. 00:27:16.026 [2024-07-16 01:32:41.803424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.026 [2024-07-16 01:32:41.803459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.026 qpair failed and we were unable to recover it. 00:27:16.026 [2024-07-16 01:32:41.803660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.026 [2024-07-16 01:32:41.803692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.026 qpair failed and we were unable to recover it. 00:27:16.026 [2024-07-16 01:32:41.803874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.026 [2024-07-16 01:32:41.803905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.026 qpair failed and we were unable to recover it. 00:27:16.026 [2024-07-16 01:32:41.804083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.026 [2024-07-16 01:32:41.804115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.026 qpair failed and we were unable to recover it. 00:27:16.026 [2024-07-16 01:32:41.804377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.026 [2024-07-16 01:32:41.804410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.026 qpair failed and we were unable to recover it. 
00:27:16.026 [2024-07-16 01:32:41.804667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.026 [2024-07-16 01:32:41.804699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.026 qpair failed and we were unable to recover it. 00:27:16.026 [2024-07-16 01:32:41.804920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.026 [2024-07-16 01:32:41.804950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.026 qpair failed and we were unable to recover it. 00:27:16.026 [2024-07-16 01:32:41.805159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.026 [2024-07-16 01:32:41.805190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.026 qpair failed and we were unable to recover it. 00:27:16.026 [2024-07-16 01:32:41.805452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.026 [2024-07-16 01:32:41.805496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.026 qpair failed and we were unable to recover it. 00:27:16.026 [2024-07-16 01:32:41.805638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.026 [2024-07-16 01:32:41.805651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.026 qpair failed and we were unable to recover it. 00:27:16.026 [2024-07-16 01:32:41.805806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.026 [2024-07-16 01:32:41.805818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.026 qpair failed and we were unable to recover it. 00:27:16.026 [2024-07-16 01:32:41.805974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.026 [2024-07-16 01:32:41.806005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.026 qpair failed and we were unable to recover it. 00:27:16.026 [2024-07-16 01:32:41.806254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.026 [2024-07-16 01:32:41.806285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.026 qpair failed and we were unable to recover it. 00:27:16.026 [2024-07-16 01:32:41.806398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.026 [2024-07-16 01:32:41.806430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.026 qpair failed and we were unable to recover it. 00:27:16.026 [2024-07-16 01:32:41.806619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.026 [2024-07-16 01:32:41.806630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.026 qpair failed and we were unable to recover it. 
00:27:16.026 [2024-07-16 01:32:41.806771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.026 [2024-07-16 01:32:41.806802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.026 qpair failed and we were unable to recover it. 00:27:16.026 [2024-07-16 01:32:41.806981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.026 [2024-07-16 01:32:41.807013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.026 qpair failed and we were unable to recover it. 00:27:16.026 [2024-07-16 01:32:41.807202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.026 [2024-07-16 01:32:41.807232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.026 qpair failed and we were unable to recover it. 00:27:16.026 [2024-07-16 01:32:41.807440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.026 [2024-07-16 01:32:41.807453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.026 qpair failed and we were unable to recover it. 00:27:16.026 [2024-07-16 01:32:41.807603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.026 [2024-07-16 01:32:41.807633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.026 qpair failed and we were unable to recover it. 00:27:16.026 [2024-07-16 01:32:41.807890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.026 [2024-07-16 01:32:41.807920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.026 qpair failed and we were unable to recover it. 00:27:16.026 [2024-07-16 01:32:41.808029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.026 [2024-07-16 01:32:41.808059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.026 qpair failed and we were unable to recover it. 00:27:16.026 [2024-07-16 01:32:41.808196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.026 [2024-07-16 01:32:41.808226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.026 qpair failed and we were unable to recover it. 00:27:16.026 [2024-07-16 01:32:41.808362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.026 [2024-07-16 01:32:41.808394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.026 qpair failed and we were unable to recover it. 00:27:16.026 [2024-07-16 01:32:41.808514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.026 [2024-07-16 01:32:41.808525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.026 qpair failed and we were unable to recover it. 
00:27:16.026 [2024-07-16 01:32:41.808661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.026 [2024-07-16 01:32:41.808672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.026 qpair failed and we were unable to recover it.
00:27:16.026 [2024-07-16 01:32:41.808824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.026 [2024-07-16 01:32:41.808854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.026 qpair failed and we were unable to recover it.
00:27:16.026 [2024-07-16 01:32:41.809030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.026 [2024-07-16 01:32:41.809060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.026 qpair failed and we were unable to recover it.
00:27:16.026 [2024-07-16 01:32:41.809183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.026 [2024-07-16 01:32:41.809213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.026 qpair failed and we were unable to recover it.
00:27:16.026 [2024-07-16 01:32:41.809358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.026 [2024-07-16 01:32:41.809369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.026 qpair failed and we were unable to recover it.
00:27:16.026 [2024-07-16 01:32:41.809566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.026 [2024-07-16 01:32:41.809578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.026 qpair failed and we were unable to recover it.
00:27:16.027 [2024-07-16 01:32:41.809669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.027 [2024-07-16 01:32:41.809679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.027 qpair failed and we were unable to recover it.
00:27:16.027 [2024-07-16 01:32:41.809856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.027 [2024-07-16 01:32:41.809885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.027 qpair failed and we were unable to recover it.
00:27:16.027 [2024-07-16 01:32:41.810001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.027 [2024-07-16 01:32:41.810031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.027 qpair failed and we were unable to recover it.
00:27:16.027 [2024-07-16 01:32:41.810254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.027 [2024-07-16 01:32:41.810284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.027 qpair failed and we were unable to recover it.
00:27:16.027 [2024-07-16 01:32:41.810417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.027 [2024-07-16 01:32:41.810448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.027 qpair failed and we were unable to recover it.
00:27:16.027 [2024-07-16 01:32:41.810698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.027 [2024-07-16 01:32:41.810729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.027 qpair failed and we were unable to recover it.
00:27:16.027 [2024-07-16 01:32:41.810909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.027 [2024-07-16 01:32:41.810939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.027 qpair failed and we were unable to recover it.
00:27:16.027 [2024-07-16 01:32:41.811155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.027 [2024-07-16 01:32:41.811185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.027 qpair failed and we were unable to recover it.
00:27:16.027 [2024-07-16 01:32:41.811314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.027 [2024-07-16 01:32:41.811362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.027 qpair failed and we were unable to recover it.
00:27:16.027 [2024-07-16 01:32:41.811553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.027 [2024-07-16 01:32:41.811584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.027 qpair failed and we were unable to recover it.
00:27:16.027 [2024-07-16 01:32:41.811710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.027 [2024-07-16 01:32:41.811740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.027 qpair failed and we were unable to recover it.
00:27:16.027 [2024-07-16 01:32:41.811964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.027 [2024-07-16 01:32:41.811994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.027 qpair failed and we were unable to recover it.
00:27:16.027 [2024-07-16 01:32:41.812164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.027 [2024-07-16 01:32:41.812194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.027 qpair failed and we were unable to recover it.
00:27:16.027 [2024-07-16 01:32:41.812375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.027 [2024-07-16 01:32:41.812387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.027 qpair failed and we were unable to recover it.
00:27:16.027 [2024-07-16 01:32:41.812539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.027 [2024-07-16 01:32:41.812569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.027 qpair failed and we were unable to recover it.
00:27:16.027 [2024-07-16 01:32:41.812829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.027 [2024-07-16 01:32:41.812859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.027 qpair failed and we were unable to recover it.
00:27:16.027 [2024-07-16 01:32:41.813033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.027 [2024-07-16 01:32:41.813064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.027 qpair failed and we were unable to recover it.
00:27:16.027 [2024-07-16 01:32:41.813258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.027 [2024-07-16 01:32:41.813289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.027 qpair failed and we were unable to recover it.
00:27:16.027 [2024-07-16 01:32:41.813548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.027 [2024-07-16 01:32:41.813586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.027 qpair failed and we were unable to recover it.
00:27:16.027 [2024-07-16 01:32:41.813703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.027 [2024-07-16 01:32:41.813714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.027 qpair failed and we were unable to recover it.
00:27:16.027 [2024-07-16 01:32:41.813918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.027 [2024-07-16 01:32:41.813950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.027 qpair failed and we were unable to recover it.
00:27:16.027 [2024-07-16 01:32:41.814194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.027 [2024-07-16 01:32:41.814225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.027 qpair failed and we were unable to recover it.
00:27:16.027 [2024-07-16 01:32:41.814414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.027 [2024-07-16 01:32:41.814453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.027 qpair failed and we were unable to recover it.
00:27:16.027 [2024-07-16 01:32:41.814626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.027 [2024-07-16 01:32:41.814637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.027 qpair failed and we were unable to recover it.
00:27:16.027 [2024-07-16 01:32:41.814731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.027 [2024-07-16 01:32:41.814762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.027 qpair failed and we were unable to recover it.
00:27:16.027 [2024-07-16 01:32:41.814980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.027 [2024-07-16 01:32:41.815011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.027 qpair failed and we were unable to recover it.
00:27:16.027 [2024-07-16 01:32:41.815227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.027 [2024-07-16 01:32:41.815258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.027 qpair failed and we were unable to recover it.
00:27:16.027 [2024-07-16 01:32:41.815403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.027 [2024-07-16 01:32:41.815415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.027 qpair failed and we were unable to recover it.
00:27:16.027 [2024-07-16 01:32:41.815656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.027 [2024-07-16 01:32:41.815687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.027 qpair failed and we were unable to recover it.
00:27:16.027 [2024-07-16 01:32:41.815879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.027 [2024-07-16 01:32:41.815910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.027 qpair failed and we were unable to recover it.
00:27:16.027 [2024-07-16 01:32:41.816056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.027 [2024-07-16 01:32:41.816088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.027 qpair failed and we were unable to recover it.
00:27:16.027 [2024-07-16 01:32:41.816223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.027 [2024-07-16 01:32:41.816253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.027 qpair failed and we were unable to recover it.
00:27:16.027 [2024-07-16 01:32:41.816435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.027 [2024-07-16 01:32:41.816468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.027 qpair failed and we were unable to recover it.
00:27:16.028 [2024-07-16 01:32:41.816601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.028 [2024-07-16 01:32:41.816631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.028 qpair failed and we were unable to recover it.
00:27:16.028 [2024-07-16 01:32:41.816751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.028 [2024-07-16 01:32:41.816781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.028 qpair failed and we were unable to recover it.
00:27:16.028 [2024-07-16 01:32:41.816959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.028 [2024-07-16 01:32:41.816990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.028 qpair failed and we were unable to recover it.
00:27:16.028 [2024-07-16 01:32:41.817125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.028 [2024-07-16 01:32:41.817156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.028 qpair failed and we were unable to recover it.
00:27:16.028 [2024-07-16 01:32:41.817331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.028 [2024-07-16 01:32:41.817373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.028 qpair failed and we were unable to recover it.
00:27:16.028 [2024-07-16 01:32:41.817479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.028 [2024-07-16 01:32:41.817508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.028 qpair failed and we were unable to recover it.
00:27:16.028 [2024-07-16 01:32:41.817707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.028 [2024-07-16 01:32:41.817717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.028 qpair failed and we were unable to recover it.
00:27:16.028 [2024-07-16 01:32:41.817858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.028 [2024-07-16 01:32:41.817869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.028 qpair failed and we were unable to recover it.
00:27:16.028 [2024-07-16 01:32:41.817950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.028 [2024-07-16 01:32:41.817959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.028 qpair failed and we were unable to recover it.
00:27:16.028 [2024-07-16 01:32:41.818061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.028 [2024-07-16 01:32:41.818091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.028 qpair failed and we were unable to recover it.
00:27:16.028 [2024-07-16 01:32:41.818283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.028 [2024-07-16 01:32:41.818314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.028 qpair failed and we were unable to recover it.
00:27:16.028 [2024-07-16 01:32:41.818565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.028 [2024-07-16 01:32:41.818625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420
00:27:16.028 qpair failed and we were unable to recover it.
00:27:16.028 [2024-07-16 01:32:41.818800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.028 [2024-07-16 01:32:41.818819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420
00:27:16.028 qpair failed and we were unable to recover it.
00:27:16.028 [2024-07-16 01:32:41.818931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.028 [2024-07-16 01:32:41.818962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420
00:27:16.028 qpair failed and we were unable to recover it.
00:27:16.028 [2024-07-16 01:32:41.819247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.028 [2024-07-16 01:32:41.819289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420
00:27:16.028 qpair failed and we were unable to recover it.
00:27:16.028 [2024-07-16 01:32:41.819374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.028 [2024-07-16 01:32:41.819392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420
00:27:16.028 qpair failed and we were unable to recover it.
00:27:16.028 [2024-07-16 01:32:41.819489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.028 [2024-07-16 01:32:41.819504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420
00:27:16.028 qpair failed and we were unable to recover it.
00:27:16.028 [2024-07-16 01:32:41.819680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.028 [2024-07-16 01:32:41.819710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420
00:27:16.028 qpair failed and we were unable to recover it.
00:27:16.028 [2024-07-16 01:32:41.819888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.028 [2024-07-16 01:32:41.819918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420
00:27:16.028 qpair failed and we were unable to recover it.
00:27:16.028 [2024-07-16 01:32:41.820088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.028 [2024-07-16 01:32:41.820118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420
00:27:16.028 qpair failed and we were unable to recover it.
00:27:16.028 [2024-07-16 01:32:41.820302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.028 [2024-07-16 01:32:41.820332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420
00:27:16.028 qpair failed and we were unable to recover it.
00:27:16.028 [2024-07-16 01:32:41.820447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.028 [2024-07-16 01:32:41.820463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420
00:27:16.028 qpair failed and we were unable to recover it.
00:27:16.028 [2024-07-16 01:32:41.820552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.028 [2024-07-16 01:32:41.820567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420
00:27:16.028 qpair failed and we were unable to recover it.
00:27:16.028 [2024-07-16 01:32:41.820717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.028 [2024-07-16 01:32:41.820754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.028 qpair failed and we were unable to recover it.
00:27:16.028 [2024-07-16 01:32:41.820937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.028 [2024-07-16 01:32:41.820968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.028 qpair failed and we were unable to recover it.
00:27:16.028 [2024-07-16 01:32:41.821238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.028 [2024-07-16 01:32:41.821275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.028 qpair failed and we were unable to recover it.
00:27:16.028 [2024-07-16 01:32:41.821371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.028 [2024-07-16 01:32:41.821381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.028 qpair failed and we were unable to recover it.
00:27:16.028 [2024-07-16 01:32:41.821468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.028 [2024-07-16 01:32:41.821477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.028 qpair failed and we were unable to recover it.
00:27:16.028 [2024-07-16 01:32:41.821651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.028 [2024-07-16 01:32:41.821682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.028 qpair failed and we were unable to recover it.
00:27:16.028 [2024-07-16 01:32:41.821878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.028 [2024-07-16 01:32:41.821908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.028 qpair failed and we were unable to recover it.
00:27:16.028 [2024-07-16 01:32:41.822044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.028 [2024-07-16 01:32:41.822075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.028 qpair failed and we were unable to recover it.
00:27:16.028 [2024-07-16 01:32:41.822323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.028 [2024-07-16 01:32:41.822362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.028 qpair failed and we were unable to recover it.
00:27:16.028 [2024-07-16 01:32:41.822485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.028 [2024-07-16 01:32:41.822519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.028 qpair failed and we were unable to recover it.
00:27:16.028 [2024-07-16 01:32:41.822656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.028 [2024-07-16 01:32:41.822667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.028 qpair failed and we were unable to recover it.
00:27:16.028 [2024-07-16 01:32:41.822808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.028 [2024-07-16 01:32:41.822838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.028 qpair failed and we were unable to recover it.
00:27:16.028 [2024-07-16 01:32:41.823024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.029 [2024-07-16 01:32:41.823055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.029 qpair failed and we were unable to recover it.
00:27:16.029 [2024-07-16 01:32:41.823187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.029 [2024-07-16 01:32:41.823217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.029 qpair failed and we were unable to recover it.
00:27:16.029 [2024-07-16 01:32:41.823477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.029 [2024-07-16 01:32:41.823491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.029 qpair failed and we were unable to recover it.
00:27:16.029 [2024-07-16 01:32:41.823647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.029 [2024-07-16 01:32:41.823677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.029 qpair failed and we were unable to recover it.
00:27:16.029 [2024-07-16 01:32:41.823887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.029 [2024-07-16 01:32:41.823918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.029 qpair failed and we were unable to recover it.
00:27:16.029 [2024-07-16 01:32:41.824158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.029 [2024-07-16 01:32:41.824188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.029 qpair failed and we were unable to recover it.
00:27:16.029 [2024-07-16 01:32:41.824315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.029 [2024-07-16 01:32:41.824370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.029 qpair failed and we were unable to recover it.
00:27:16.029 [2024-07-16 01:32:41.824546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.029 [2024-07-16 01:32:41.824576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.029 qpair failed and we were unable to recover it.
00:27:16.029 [2024-07-16 01:32:41.824767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.029 [2024-07-16 01:32:41.824797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.029 qpair failed and we were unable to recover it.
00:27:16.029 [2024-07-16 01:32:41.825001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.029 [2024-07-16 01:32:41.825032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.029 qpair failed and we were unable to recover it.
00:27:16.029 [2024-07-16 01:32:41.825147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.029 [2024-07-16 01:32:41.825177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.029 qpair failed and we were unable to recover it.
00:27:16.029 [2024-07-16 01:32:41.825366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.029 [2024-07-16 01:32:41.825377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.029 qpair failed and we were unable to recover it.
00:27:16.029 [2024-07-16 01:32:41.825531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.029 [2024-07-16 01:32:41.825542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.029 qpair failed and we were unable to recover it.
00:27:16.029 [2024-07-16 01:32:41.825685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.029 [2024-07-16 01:32:41.825696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.029 qpair failed and we were unable to recover it.
00:27:16.029 [2024-07-16 01:32:41.825763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.029 [2024-07-16 01:32:41.825773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.029 qpair failed and we were unable to recover it.
00:27:16.029 [2024-07-16 01:32:41.825858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.029 [2024-07-16 01:32:41.825868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.029 qpair failed and we were unable to recover it.
00:27:16.029 [2024-07-16 01:32:41.825948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.029 [2024-07-16 01:32:41.825958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.029 qpair failed and we were unable to recover it.
00:27:16.029 [2024-07-16 01:32:41.826189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.029 [2024-07-16 01:32:41.826220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.029 qpair failed and we were unable to recover it.
00:27:16.029 [2024-07-16 01:32:41.826354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.029 [2024-07-16 01:32:41.826385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.029 qpair failed and we were unable to recover it.
00:27:16.029 [2024-07-16 01:32:41.826500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.029 [2024-07-16 01:32:41.826531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.029 qpair failed and we were unable to recover it.
00:27:16.029 [2024-07-16 01:32:41.826626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.029 [2024-07-16 01:32:41.826637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.029 qpair failed and we were unable to recover it.
00:27:16.029 [2024-07-16 01:32:41.826860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.029 [2024-07-16 01:32:41.826890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.029 qpair failed and we were unable to recover it.
00:27:16.029 [2024-07-16 01:32:41.827070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.029 [2024-07-16 01:32:41.827100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.029 qpair failed and we were unable to recover it.
00:27:16.029 [2024-07-16 01:32:41.827376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.029 [2024-07-16 01:32:41.827416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.029 qpair failed and we were unable to recover it.
00:27:16.029 [2024-07-16 01:32:41.827610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.029 [2024-07-16 01:32:41.827621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.029 qpair failed and we were unable to recover it.
00:27:16.029 [2024-07-16 01:32:41.827779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.029 [2024-07-16 01:32:41.827790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.029 qpair failed and we were unable to recover it.
00:27:16.029 [2024-07-16 01:32:41.827868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.029 [2024-07-16 01:32:41.827911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.029 qpair failed and we were unable to recover it.
00:27:16.029 [2024-07-16 01:32:41.828037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.029 [2024-07-16 01:32:41.828067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.029 qpair failed and we were unable to recover it.
00:27:16.029 [2024-07-16 01:32:41.828197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.029 [2024-07-16 01:32:41.828228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.029 qpair failed and we were unable to recover it.
00:27:16.029 [2024-07-16 01:32:41.828421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.029 [2024-07-16 01:32:41.828453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.029 qpair failed and we were unable to recover it.
00:27:16.029 [2024-07-16 01:32:41.828610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.029 [2024-07-16 01:32:41.828623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.029 qpair failed and we were unable to recover it.
00:27:16.029 [2024-07-16 01:32:41.828765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.029 [2024-07-16 01:32:41.828795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.029 qpair failed and we were unable to recover it.
00:27:16.029 [2024-07-16 01:32:41.829037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.029 [2024-07-16 01:32:41.829067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.029 qpair failed and we were unable to recover it.
00:27:16.029 [2024-07-16 01:32:41.829193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.029 [2024-07-16 01:32:41.829223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.029 qpair failed and we were unable to recover it.
00:27:16.029 [2024-07-16 01:32:41.829360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.029 [2024-07-16 01:32:41.829392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.029 qpair failed and we were unable to recover it.
00:27:16.029 [2024-07-16 01:32:41.829573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.029 [2024-07-16 01:32:41.829604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.029 qpair failed and we were unable to recover it.
00:27:16.029 [2024-07-16 01:32:41.829870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.029 [2024-07-16 01:32:41.829902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.029 qpair failed and we were unable to recover it.
00:27:16.029 [2024-07-16 01:32:41.830089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.029 [2024-07-16 01:32:41.830120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.029 qpair failed and we were unable to recover it.
00:27:16.029 [2024-07-16 01:32:41.830241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.029 [2024-07-16 01:32:41.830271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.029 qpair failed and we were unable to recover it.
00:27:16.029 [2024-07-16 01:32:41.830461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.029 [2024-07-16 01:32:41.830499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.029 qpair failed and we were unable to recover it.
00:27:16.029 [2024-07-16 01:32:41.830645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.029 [2024-07-16 01:32:41.830656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.029 qpair failed and we were unable to recover it.
00:27:16.030 [2024-07-16 01:32:41.830763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.030 [2024-07-16 01:32:41.830793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.030 qpair failed and we were unable to recover it.
00:27:16.030 [2024-07-16 01:32:41.830967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.030 [2024-07-16 01:32:41.830997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.030 qpair failed and we were unable to recover it.
00:27:16.030 [2024-07-16 01:32:41.831241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.030 [2024-07-16 01:32:41.831271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.030 qpair failed and we were unable to recover it.
00:27:16.030 [2024-07-16 01:32:41.831459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.030 [2024-07-16 01:32:41.831472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.030 qpair failed and we were unable to recover it.
00:27:16.030 [2024-07-16 01:32:41.831553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.030 [2024-07-16 01:32:41.831563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.030 qpair failed and we were unable to recover it.
00:27:16.030 [2024-07-16 01:32:41.831721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.030 [2024-07-16 01:32:41.831732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.030 qpair failed and we were unable to recover it.
00:27:16.030 [2024-07-16 01:32:41.831868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.030 [2024-07-16 01:32:41.831899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.030 qpair failed and we were unable to recover it.
00:27:16.030 [2024-07-16 01:32:41.832039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.030 [2024-07-16 01:32:41.832068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.030 qpair failed and we were unable to recover it.
00:27:16.030 [2024-07-16 01:32:41.832262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.030 [2024-07-16 01:32:41.832292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.030 qpair failed and we were unable to recover it.
00:27:16.030 [2024-07-16 01:32:41.832429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.030 [2024-07-16 01:32:41.832440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.030 qpair failed and we were unable to recover it.
00:27:16.030 [2024-07-16 01:32:41.832523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.030 [2024-07-16 01:32:41.832533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.030 qpair failed and we were unable to recover it.
00:27:16.030 [2024-07-16 01:32:41.832665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.030 [2024-07-16 01:32:41.832675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.030 qpair failed and we were unable to recover it.
00:27:16.030 [2024-07-16 01:32:41.832739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.030 [2024-07-16 01:32:41.832749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.030 qpair failed and we were unable to recover it.
00:27:16.030 [2024-07-16 01:32:41.832892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.030 [2024-07-16 01:32:41.832903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.030 qpair failed and we were unable to recover it.
00:27:16.030 [2024-07-16 01:32:41.832990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.030 [2024-07-16 01:32:41.832999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.030 qpair failed and we were unable to recover it.
00:27:16.030 [2024-07-16 01:32:41.833160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.030 [2024-07-16 01:32:41.833170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.030 qpair failed and we were unable to recover it.
00:27:16.030 [2024-07-16 01:32:41.833312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.030 [2024-07-16 01:32:41.833354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.030 qpair failed and we were unable to recover it.
00:27:16.030 [2024-07-16 01:32:41.833546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.030 [2024-07-16 01:32:41.833576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.030 qpair failed and we were unable to recover it.
00:27:16.030 [2024-07-16 01:32:41.833819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.030 [2024-07-16 01:32:41.833849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.030 qpair failed and we were unable to recover it.
00:27:16.030 [2024-07-16 01:32:41.834037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.030 [2024-07-16 01:32:41.834067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.030 qpair failed and we were unable to recover it.
00:27:16.030 [2024-07-16 01:32:41.834364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.030 [2024-07-16 01:32:41.834396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.030 qpair failed and we were unable to recover it.
00:27:16.030 [2024-07-16 01:32:41.834590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.030 [2024-07-16 01:32:41.834621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.030 qpair failed and we were unable to recover it.
00:27:16.030 [2024-07-16 01:32:41.834797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.030 [2024-07-16 01:32:41.834827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.030 qpair failed and we were unable to recover it.
00:27:16.030 [2024-07-16 01:32:41.835004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.030 [2024-07-16 01:32:41.835036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.030 qpair failed and we were unable to recover it.
00:27:16.030 [2024-07-16 01:32:41.835288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.030 [2024-07-16 01:32:41.835318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.030 qpair failed and we were unable to recover it.
00:27:16.030 [2024-07-16 01:32:41.835495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.030 [2024-07-16 01:32:41.835507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.030 qpair failed and we were unable to recover it.
00:27:16.030 [2024-07-16 01:32:41.835712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.030 [2024-07-16 01:32:41.835742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.030 qpair failed and we were unable to recover it.
00:27:16.030 [2024-07-16 01:32:41.836009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.030 [2024-07-16 01:32:41.836040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.030 qpair failed and we were unable to recover it.
00:27:16.030 [2024-07-16 01:32:41.836281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.030 [2024-07-16 01:32:41.836312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.030 qpair failed and we were unable to recover it.
00:27:16.030 [2024-07-16 01:32:41.836431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.030 [2024-07-16 01:32:41.836445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.030 qpair failed and we were unable to recover it.
00:27:16.030 [2024-07-16 01:32:41.836535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.030 [2024-07-16 01:32:41.836545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.030 qpair failed and we were unable to recover it.
00:27:16.030 [2024-07-16 01:32:41.836699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.030 [2024-07-16 01:32:41.836730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.030 qpair failed and we were unable to recover it.
00:27:16.030 [2024-07-16 01:32:41.836938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.030 [2024-07-16 01:32:41.836968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.030 qpair failed and we were unable to recover it.
00:27:16.030 [2024-07-16 01:32:41.837156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.030 [2024-07-16 01:32:41.837187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.030 qpair failed and we were unable to recover it.
00:27:16.030 [2024-07-16 01:32:41.837454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.030 [2024-07-16 01:32:41.837465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.030 qpair failed and we were unable to recover it.
00:27:16.030 [2024-07-16 01:32:41.837534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.030 [2024-07-16 01:32:41.837563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.030 qpair failed and we were unable to recover it.
00:27:16.031 [2024-07-16 01:32:41.837804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.031 [2024-07-16 01:32:41.837835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.031 qpair failed and we were unable to recover it.
00:27:16.031 [2024-07-16 01:32:41.838080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.031 [2024-07-16 01:32:41.838111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.031 qpair failed and we were unable to recover it.
00:27:16.031 [2024-07-16 01:32:41.838249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.031 [2024-07-16 01:32:41.838280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.031 qpair failed and we were unable to recover it.
00:27:16.031 [2024-07-16 01:32:41.838581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.031 [2024-07-16 01:32:41.838613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.031 qpair failed and we were unable to recover it.
00:27:16.031 [2024-07-16 01:32:41.838875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.031 [2024-07-16 01:32:41.838886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.031 qpair failed and we were unable to recover it.
00:27:16.031 [2024-07-16 01:32:41.839034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.031 [2024-07-16 01:32:41.839045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.031 qpair failed and we were unable to recover it.
00:27:16.031 [2024-07-16 01:32:41.839197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.031 [2024-07-16 01:32:41.839227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.031 qpair failed and we were unable to recover it.
00:27:16.031 [2024-07-16 01:32:41.839363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.031 [2024-07-16 01:32:41.839404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.031 qpair failed and we were unable to recover it.
00:27:16.031 [2024-07-16 01:32:41.839581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.031 [2024-07-16 01:32:41.839612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.031 qpair failed and we were unable to recover it.
00:27:16.031 [2024-07-16 01:32:41.839732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.031 [2024-07-16 01:32:41.839763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.031 qpair failed and we were unable to recover it.
00:27:16.031 [2024-07-16 01:32:41.839953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.031 [2024-07-16 01:32:41.839983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.031 qpair failed and we were unable to recover it.
00:27:16.031 [2024-07-16 01:32:41.840279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.031 [2024-07-16 01:32:41.840310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.031 qpair failed and we were unable to recover it.
00:27:16.031 [2024-07-16 01:32:41.840524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.031 [2024-07-16 01:32:41.840556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.031 qpair failed and we were unable to recover it.
00:27:16.031 [2024-07-16 01:32:41.840681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.031 [2024-07-16 01:32:41.840712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.031 qpair failed and we were unable to recover it.
00:27:16.031 [2024-07-16 01:32:41.840956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.031 [2024-07-16 01:32:41.840986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.031 qpair failed and we were unable to recover it.
00:27:16.031 [2024-07-16 01:32:41.841195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.031 [2024-07-16 01:32:41.841225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.031 qpair failed and we were unable to recover it.
00:27:16.031 [2024-07-16 01:32:41.841331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.031 [2024-07-16 01:32:41.841373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.031 qpair failed and we were unable to recover it.
00:27:16.031 [2024-07-16 01:32:41.841545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.031 [2024-07-16 01:32:41.841577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.031 qpair failed and we were unable to recover it.
00:27:16.031 [2024-07-16 01:32:41.841770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.031 [2024-07-16 01:32:41.841781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.031 qpair failed and we were unable to recover it.
00:27:16.031 [2024-07-16 01:32:41.841931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.031 [2024-07-16 01:32:41.841942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.031 qpair failed and we were unable to recover it. 00:27:16.031 [2024-07-16 01:32:41.842082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.031 [2024-07-16 01:32:41.842093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.031 qpair failed and we were unable to recover it. 00:27:16.031 [2024-07-16 01:32:41.842241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.031 [2024-07-16 01:32:41.842254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.031 qpair failed and we were unable to recover it. 00:27:16.031 [2024-07-16 01:32:41.842353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.031 [2024-07-16 01:32:41.842364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.031 qpair failed and we were unable to recover it. 00:27:16.031 [2024-07-16 01:32:41.842494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.031 [2024-07-16 01:32:41.842505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.031 qpair failed and we were unable to recover it. 00:27:16.031 [2024-07-16 01:32:41.842603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.031 [2024-07-16 01:32:41.842614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.031 qpair failed and we were unable to recover it. 00:27:16.031 [2024-07-16 01:32:41.842708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.031 [2024-07-16 01:32:41.842719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.031 qpair failed and we were unable to recover it. 00:27:16.031 [2024-07-16 01:32:41.842789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.031 [2024-07-16 01:32:41.842799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.031 qpair failed and we were unable to recover it. 00:27:16.031 [2024-07-16 01:32:41.842945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.031 [2024-07-16 01:32:41.842957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.031 qpair failed and we were unable to recover it. 00:27:16.031 [2024-07-16 01:32:41.843047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.031 [2024-07-16 01:32:41.843057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.031 qpair failed and we were unable to recover it. 
00:27:16.031 [2024-07-16 01:32:41.843206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.031 [2024-07-16 01:32:41.843217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.031 qpair failed and we were unable to recover it. 00:27:16.031 [2024-07-16 01:32:41.843348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.031 [2024-07-16 01:32:41.843363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.031 qpair failed and we were unable to recover it. 00:27:16.031 [2024-07-16 01:32:41.843462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.031 [2024-07-16 01:32:41.843473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.031 qpair failed and we were unable to recover it. 00:27:16.031 [2024-07-16 01:32:41.843649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.031 [2024-07-16 01:32:41.843660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.031 qpair failed and we were unable to recover it. 00:27:16.031 [2024-07-16 01:32:41.843761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.031 [2024-07-16 01:32:41.843773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.031 qpair failed and we were unable to recover it. 00:27:16.031 [2024-07-16 01:32:41.843858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.031 [2024-07-16 01:32:41.843869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.031 qpair failed and we were unable to recover it. 00:27:16.031 [2024-07-16 01:32:41.843938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.031 [2024-07-16 01:32:41.843948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.031 qpair failed and we were unable to recover it. 00:27:16.031 [2024-07-16 01:32:41.844027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.031 [2024-07-16 01:32:41.844037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.031 qpair failed and we were unable to recover it. 00:27:16.031 [2024-07-16 01:32:41.844186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.031 [2024-07-16 01:32:41.844196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.031 qpair failed and we were unable to recover it. 00:27:16.031 [2024-07-16 01:32:41.844268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.032 [2024-07-16 01:32:41.844279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.032 qpair failed and we were unable to recover it. 
00:27:16.032 [2024-07-16 01:32:41.844430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.032 [2024-07-16 01:32:41.844442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.032 qpair failed and we were unable to recover it. 00:27:16.032 [2024-07-16 01:32:41.844517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.032 [2024-07-16 01:32:41.844526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.032 qpair failed and we were unable to recover it. 00:27:16.032 [2024-07-16 01:32:41.844591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.032 [2024-07-16 01:32:41.844601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.032 qpair failed and we were unable to recover it. 00:27:16.032 [2024-07-16 01:32:41.844745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.032 [2024-07-16 01:32:41.844755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.032 qpair failed and we were unable to recover it. 00:27:16.032 [2024-07-16 01:32:41.844899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.032 [2024-07-16 01:32:41.844910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.032 qpair failed and we were unable to recover it. 00:27:16.032 [2024-07-16 01:32:41.844993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.032 [2024-07-16 01:32:41.845002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.032 qpair failed and we were unable to recover it. 00:27:16.032 [2024-07-16 01:32:41.845141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.032 [2024-07-16 01:32:41.845152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.032 qpair failed and we were unable to recover it. 00:27:16.032 [2024-07-16 01:32:41.845300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.032 [2024-07-16 01:32:41.845311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.032 qpair failed and we were unable to recover it. 00:27:16.032 [2024-07-16 01:32:41.845467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.032 [2024-07-16 01:32:41.845479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.032 qpair failed and we were unable to recover it. 00:27:16.032 [2024-07-16 01:32:41.845567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.032 [2024-07-16 01:32:41.845578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.032 qpair failed and we were unable to recover it. 
00:27:16.032 [2024-07-16 01:32:41.845643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.032 [2024-07-16 01:32:41.845653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.032 qpair failed and we were unable to recover it. 00:27:16.032 [2024-07-16 01:32:41.845733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.032 [2024-07-16 01:32:41.845744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.032 qpair failed and we were unable to recover it. 00:27:16.032 [2024-07-16 01:32:41.845893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.032 [2024-07-16 01:32:41.845904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.032 qpair failed and we were unable to recover it. 00:27:16.032 [2024-07-16 01:32:41.846109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.032 [2024-07-16 01:32:41.846120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.032 qpair failed and we were unable to recover it. 00:27:16.032 [2024-07-16 01:32:41.846202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.032 [2024-07-16 01:32:41.846212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.032 qpair failed and we were unable to recover it. 00:27:16.032 [2024-07-16 01:32:41.846416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.032 [2024-07-16 01:32:41.846427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.032 qpair failed and we were unable to recover it. 00:27:16.032 [2024-07-16 01:32:41.846497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.032 [2024-07-16 01:32:41.846507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.032 qpair failed and we were unable to recover it. 00:27:16.032 [2024-07-16 01:32:41.846723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.032 [2024-07-16 01:32:41.846734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.032 qpair failed and we were unable to recover it. 00:27:16.032 [2024-07-16 01:32:41.846824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.032 [2024-07-16 01:32:41.846834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.032 qpair failed and we were unable to recover it. 00:27:16.032 [2024-07-16 01:32:41.846998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.032 [2024-07-16 01:32:41.847009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.032 qpair failed and we were unable to recover it. 
00:27:16.032 [2024-07-16 01:32:41.847146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.032 [2024-07-16 01:32:41.847157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.032 qpair failed and we were unable to recover it. 00:27:16.032 [2024-07-16 01:32:41.847294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.032 [2024-07-16 01:32:41.847306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.032 qpair failed and we were unable to recover it. 00:27:16.032 [2024-07-16 01:32:41.847401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.032 [2024-07-16 01:32:41.847413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.032 qpair failed and we were unable to recover it. 00:27:16.032 [2024-07-16 01:32:41.847480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.032 [2024-07-16 01:32:41.847491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.032 qpair failed and we were unable to recover it. 00:27:16.032 [2024-07-16 01:32:41.847553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.032 [2024-07-16 01:32:41.847563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.032 qpair failed and we were unable to recover it. 00:27:16.032 [2024-07-16 01:32:41.847626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.032 [2024-07-16 01:32:41.847636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.032 qpair failed and we were unable to recover it. 00:27:16.032 [2024-07-16 01:32:41.847723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.032 [2024-07-16 01:32:41.847732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.032 qpair failed and we were unable to recover it. 00:27:16.032 [2024-07-16 01:32:41.847829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.032 [2024-07-16 01:32:41.847838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.032 qpair failed and we were unable to recover it. 00:27:16.032 [2024-07-16 01:32:41.847998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.032 [2024-07-16 01:32:41.848009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.032 qpair failed and we were unable to recover it. 00:27:16.032 [2024-07-16 01:32:41.848150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.032 [2024-07-16 01:32:41.848162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.032 qpair failed and we were unable to recover it. 
00:27:16.032 [2024-07-16 01:32:41.848246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.032 [2024-07-16 01:32:41.848256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.032 qpair failed and we were unable to recover it. 00:27:16.032 [2024-07-16 01:32:41.848400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.032 [2024-07-16 01:32:41.848411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.032 qpair failed and we were unable to recover it. 00:27:16.032 [2024-07-16 01:32:41.848636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.032 [2024-07-16 01:32:41.848647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.032 qpair failed and we were unable to recover it. 00:27:16.033 [2024-07-16 01:32:41.848780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.033 [2024-07-16 01:32:41.848792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.033 qpair failed and we were unable to recover it. 00:27:16.033 [2024-07-16 01:32:41.848866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.033 [2024-07-16 01:32:41.848878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.033 qpair failed and we were unable to recover it. 00:27:16.033 [2024-07-16 01:32:41.848944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.033 [2024-07-16 01:32:41.848954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.033 qpair failed and we were unable to recover it. 00:27:16.033 [2024-07-16 01:32:41.849045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.033 [2024-07-16 01:32:41.849056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.033 qpair failed and we were unable to recover it. 00:27:16.033 [2024-07-16 01:32:41.849253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.033 [2024-07-16 01:32:41.849271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.033 qpair failed and we were unable to recover it. 00:27:16.033 [2024-07-16 01:32:41.849355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.033 [2024-07-16 01:32:41.849365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.033 qpair failed and we were unable to recover it. 00:27:16.033 [2024-07-16 01:32:41.849445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.033 [2024-07-16 01:32:41.849456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.033 qpair failed and we were unable to recover it. 
00:27:16.033 [2024-07-16 01:32:41.849538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.033 [2024-07-16 01:32:41.849549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.033 qpair failed and we were unable to recover it. 00:27:16.033 [2024-07-16 01:32:41.849637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.033 [2024-07-16 01:32:41.849649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.033 qpair failed and we were unable to recover it. 00:27:16.033 [2024-07-16 01:32:41.849731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.033 [2024-07-16 01:32:41.849741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.033 qpair failed and we were unable to recover it. 00:27:16.033 [2024-07-16 01:32:41.849887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.033 [2024-07-16 01:32:41.849898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.033 qpair failed and we were unable to recover it. 00:27:16.033 [2024-07-16 01:32:41.849965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.033 [2024-07-16 01:32:41.849975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.033 qpair failed and we were unable to recover it. 00:27:16.033 [2024-07-16 01:32:41.850127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.033 [2024-07-16 01:32:41.850138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.033 qpair failed and we were unable to recover it. 00:27:16.033 [2024-07-16 01:32:41.850358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.033 [2024-07-16 01:32:41.850370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.033 qpair failed and we were unable to recover it. 00:27:16.033 [2024-07-16 01:32:41.850453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.033 [2024-07-16 01:32:41.850465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.033 qpair failed and we were unable to recover it. 00:27:16.033 [2024-07-16 01:32:41.850538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.033 [2024-07-16 01:32:41.850549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.033 qpair failed and we were unable to recover it. 00:27:16.033 [2024-07-16 01:32:41.850693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.033 [2024-07-16 01:32:41.850704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.033 qpair failed and we were unable to recover it. 
00:27:16.033 [2024-07-16 01:32:41.850775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.033 [2024-07-16 01:32:41.850786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.033 qpair failed and we were unable to recover it. 00:27:16.033 [2024-07-16 01:32:41.850993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.033 [2024-07-16 01:32:41.851005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.033 qpair failed and we were unable to recover it. 00:27:16.033 [2024-07-16 01:32:41.851086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.033 [2024-07-16 01:32:41.851097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.033 qpair failed and we were unable to recover it. 00:27:16.033 [2024-07-16 01:32:41.851162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.033 [2024-07-16 01:32:41.851172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.033 qpair failed and we were unable to recover it. 00:27:16.033 [2024-07-16 01:32:41.851246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.033 [2024-07-16 01:32:41.851257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.033 qpair failed and we were unable to recover it. 00:27:16.033 [2024-07-16 01:32:41.851360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.033 [2024-07-16 01:32:41.851374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.033 qpair failed and we were unable to recover it. 00:27:16.033 [2024-07-16 01:32:41.851515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.033 [2024-07-16 01:32:41.851526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.033 qpair failed and we were unable to recover it. 00:27:16.033 [2024-07-16 01:32:41.851651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.033 [2024-07-16 01:32:41.851663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.033 qpair failed and we were unable to recover it. 00:27:16.033 [2024-07-16 01:32:41.851755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.033 [2024-07-16 01:32:41.851765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.033 qpair failed and we were unable to recover it. 00:27:16.033 [2024-07-16 01:32:41.851901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.033 [2024-07-16 01:32:41.851912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.033 qpair failed and we were unable to recover it. 
00:27:16.033 [2024-07-16 01:32:41.852063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.033 [2024-07-16 01:32:41.852074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.033 qpair failed and we were unable to recover it. 00:27:16.033 [2024-07-16 01:32:41.852167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.033 [2024-07-16 01:32:41.852178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.033 qpair failed and we were unable to recover it. 00:27:16.033 [2024-07-16 01:32:41.852264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.033 [2024-07-16 01:32:41.852275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.033 qpair failed and we were unable to recover it. 00:27:16.033 [2024-07-16 01:32:41.852434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.033 [2024-07-16 01:32:41.852445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.033 qpair failed and we were unable to recover it. 00:27:16.033 [2024-07-16 01:32:41.852578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.033 [2024-07-16 01:32:41.852589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.033 qpair failed and we were unable to recover it. 00:27:16.033 [2024-07-16 01:32:41.852809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.033 [2024-07-16 01:32:41.852821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.033 qpair failed and we were unable to recover it. 00:27:16.033 [2024-07-16 01:32:41.852951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.033 [2024-07-16 01:32:41.852962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.033 qpair failed and we were unable to recover it. 00:27:16.033 [2024-07-16 01:32:41.853051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.033 [2024-07-16 01:32:41.853062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.033 qpair failed and we were unable to recover it. 00:27:16.033 [2024-07-16 01:32:41.853257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.033 [2024-07-16 01:32:41.853268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.033 qpair failed and we were unable to recover it. 00:27:16.033 [2024-07-16 01:32:41.853403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.033 [2024-07-16 01:32:41.853414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.033 qpair failed and we were unable to recover it. 
00:27:16.033 [2024-07-16 01:32:41.853548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.033 [2024-07-16 01:32:41.853559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.033 qpair failed and we were unable to recover it. 00:27:16.033 [2024-07-16 01:32:41.853703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.033 [2024-07-16 01:32:41.853714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.033 qpair failed and we were unable to recover it. 00:27:16.033 [2024-07-16 01:32:41.853798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.033 [2024-07-16 01:32:41.853808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.034 qpair failed and we were unable to recover it. 00:27:16.034 [2024-07-16 01:32:41.853903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.034 [2024-07-16 01:32:41.853915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.034 qpair failed and we were unable to recover it. 00:27:16.034 [2024-07-16 01:32:41.854082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.034 [2024-07-16 01:32:41.854095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.034 qpair failed and we were unable to recover it. 00:27:16.034 [2024-07-16 01:32:41.854231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.034 [2024-07-16 01:32:41.854242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.034 qpair failed and we were unable to recover it. 00:27:16.034 [2024-07-16 01:32:41.854313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.034 [2024-07-16 01:32:41.854323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.034 qpair failed and we were unable to recover it. 00:27:16.034 [2024-07-16 01:32:41.854459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.034 [2024-07-16 01:32:41.854470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.034 qpair failed and we were unable to recover it. 00:27:16.034 [2024-07-16 01:32:41.854603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.034 [2024-07-16 01:32:41.854627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.034 qpair failed and we were unable to recover it. 00:27:16.034 [2024-07-16 01:32:41.854748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.034 [2024-07-16 01:32:41.854779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.034 qpair failed and we were unable to recover it. 
00:27:16.034 [2024-07-16 01:32:41.855050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.034 [2024-07-16 01:32:41.855081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.034 qpair failed and we were unable to recover it. 00:27:16.034 [2024-07-16 01:32:41.855265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.034 [2024-07-16 01:32:41.855295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.034 qpair failed and we were unable to recover it. 00:27:16.034 [2024-07-16 01:32:41.855460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.034 [2024-07-16 01:32:41.855495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.034 qpair failed and we were unable to recover it. 00:27:16.034 [2024-07-16 01:32:41.855632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.034 [2024-07-16 01:32:41.855663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.034 qpair failed and we were unable to recover it. 00:27:16.034 [2024-07-16 01:32:41.855797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.034 [2024-07-16 01:32:41.855827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.034 qpair failed and we were unable to recover it. 00:27:16.034 [2024-07-16 01:32:41.856015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.034 [2024-07-16 01:32:41.856046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.034 qpair failed and we were unable to recover it. 00:27:16.034 [2024-07-16 01:32:41.856303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.034 [2024-07-16 01:32:41.856333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.034 qpair failed and we were unable to recover it. 00:27:16.034 [2024-07-16 01:32:41.856587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.034 [2024-07-16 01:32:41.856598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.034 qpair failed and we were unable to recover it. 00:27:16.034 [2024-07-16 01:32:41.856753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.034 [2024-07-16 01:32:41.856764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.034 qpair failed and we were unable to recover it. 00:27:16.034 [2024-07-16 01:32:41.856843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.034 [2024-07-16 01:32:41.856875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.034 qpair failed and we were unable to recover it. 
00:27:16.034 [2024-07-16 01:32:41.857078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.034 [2024-07-16 01:32:41.857109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.034 qpair failed and we were unable to recover it. 00:27:16.034 [2024-07-16 01:32:41.857296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.034 [2024-07-16 01:32:41.857326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.034 qpair failed and we were unable to recover it. 00:27:16.034 [2024-07-16 01:32:41.857534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.034 [2024-07-16 01:32:41.857545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.034 qpair failed and we were unable to recover it. 00:27:16.034 [2024-07-16 01:32:41.857639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.034 [2024-07-16 01:32:41.857649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.034 qpair failed and we were unable to recover it. 00:27:16.034 [2024-07-16 01:32:41.857847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.034 [2024-07-16 01:32:41.857858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.034 qpair failed and we were unable to recover it. 00:27:16.034 [2024-07-16 01:32:41.857990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.034 [2024-07-16 01:32:41.858001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.034 qpair failed and we were unable to recover it. 00:27:16.034 [2024-07-16 01:32:41.858082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.034 [2024-07-16 01:32:41.858091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.034 qpair failed and we were unable to recover it. 00:27:16.034 [2024-07-16 01:32:41.858227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.034 [2024-07-16 01:32:41.858239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.034 qpair failed and we were unable to recover it. 00:27:16.034 [2024-07-16 01:32:41.858408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.034 [2024-07-16 01:32:41.858419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.034 qpair failed and we were unable to recover it. 00:27:16.034 [2024-07-16 01:32:41.858504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.034 [2024-07-16 01:32:41.858514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.034 qpair failed and we were unable to recover it. 
00:27:16.034 [2024-07-16 01:32:41.858721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.034 [2024-07-16 01:32:41.858752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.034 qpair failed and we were unable to recover it. 00:27:16.034 [2024-07-16 01:32:41.859011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.034 [2024-07-16 01:32:41.859083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.034 qpair failed and we were unable to recover it. 00:27:16.034 [2024-07-16 01:32:41.859223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.034 [2024-07-16 01:32:41.859262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.034 qpair failed and we were unable to recover it. 00:27:16.034 [2024-07-16 01:32:41.859395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.034 [2024-07-16 01:32:41.859413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.034 qpair failed and we were unable to recover it. 00:27:16.034 [2024-07-16 01:32:41.859566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.034 [2024-07-16 01:32:41.859582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.034 qpair failed and we were unable to recover it. 00:27:16.034 [2024-07-16 01:32:41.859735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.034 [2024-07-16 01:32:41.859765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.034 qpair failed and we were unable to recover it. 00:27:16.034 [2024-07-16 01:32:41.859891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.034 [2024-07-16 01:32:41.859921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.034 qpair failed and we were unable to recover it. 00:27:16.034 [2024-07-16 01:32:41.860041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.034 [2024-07-16 01:32:41.860071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.034 qpair failed and we were unable to recover it. 00:27:16.035 [2024-07-16 01:32:41.860183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.035 [2024-07-16 01:32:41.860213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.035 qpair failed and we were unable to recover it. 00:27:16.035 [2024-07-16 01:32:41.860405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.035 [2024-07-16 01:32:41.860437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.035 qpair failed and we were unable to recover it. 
00:27:16.035 [2024-07-16 01:32:41.860636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.035 [2024-07-16 01:32:41.860667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.035 qpair failed and we were unable to recover it. 00:27:16.035 [2024-07-16 01:32:41.860858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.035 [2024-07-16 01:32:41.860888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.035 qpair failed and we were unable to recover it. 00:27:16.035 [2024-07-16 01:32:41.861012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.035 [2024-07-16 01:32:41.861042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.035 qpair failed and we were unable to recover it. 00:27:16.035 [2024-07-16 01:32:41.861226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.035 [2024-07-16 01:32:41.861257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.035 qpair failed and we were unable to recover it. 00:27:16.035 [2024-07-16 01:32:41.861437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.035 [2024-07-16 01:32:41.861468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.035 qpair failed and we were unable to recover it. 00:27:16.035 [2024-07-16 01:32:41.861612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.035 [2024-07-16 01:32:41.861642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.035 qpair failed and we were unable to recover it. 00:27:16.035 [2024-07-16 01:32:41.861766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.035 [2024-07-16 01:32:41.861797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.035 qpair failed and we were unable to recover it. 00:27:16.035 [2024-07-16 01:32:41.861959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.035 [2024-07-16 01:32:41.861975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.035 qpair failed and we were unable to recover it. 00:27:16.035 [2024-07-16 01:32:41.862068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.035 [2024-07-16 01:32:41.862107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.035 qpair failed and we were unable to recover it. 00:27:16.035 [2024-07-16 01:32:41.862315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.035 [2024-07-16 01:32:41.862356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.035 qpair failed and we were unable to recover it. 
00:27:16.035 [2024-07-16 01:32:41.862493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.035 [2024-07-16 01:32:41.862524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.035 qpair failed and we were unable to recover it. 00:27:16.035 [2024-07-16 01:32:41.862711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.035 [2024-07-16 01:32:41.862727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.035 qpair failed and we were unable to recover it. 00:27:16.035 [2024-07-16 01:32:41.862885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.035 [2024-07-16 01:32:41.862916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.035 qpair failed and we were unable to recover it. 00:27:16.035 [2024-07-16 01:32:41.863028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.035 [2024-07-16 01:32:41.863058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.035 qpair failed and we were unable to recover it. 00:27:16.035 [2024-07-16 01:32:41.863243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.035 [2024-07-16 01:32:41.863274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.035 qpair failed and we were unable to recover it. 00:27:16.035 [2024-07-16 01:32:41.863415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.035 [2024-07-16 01:32:41.863446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.035 qpair failed and we were unable to recover it. 00:27:16.035 [2024-07-16 01:32:41.863562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.035 [2024-07-16 01:32:41.863592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.035 qpair failed and we were unable to recover it. 00:27:16.035 [2024-07-16 01:32:41.863800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.035 [2024-07-16 01:32:41.863830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.035 qpair failed and we were unable to recover it. 00:27:16.035 [2024-07-16 01:32:41.863947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.035 [2024-07-16 01:32:41.863984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.035 qpair failed and we were unable to recover it. 00:27:16.035 [2024-07-16 01:32:41.864165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.035 [2024-07-16 01:32:41.864196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.035 qpair failed and we were unable to recover it. 
00:27:16.035 [2024-07-16 01:32:41.864396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.035 [2024-07-16 01:32:41.864428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.035 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for each reconnect attempt, with only the timestamps advancing, from 01:32:41.864700 through 01:32:41.904741 ...]
00:27:16.040 [2024-07-16 01:32:41.904871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.040 [2024-07-16 01:32:41.904882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.040 qpair failed and we were unable to recover it.
00:27:16.040 [2024-07-16 01:32:41.905009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.040 [2024-07-16 01:32:41.905020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.040 qpair failed and we were unable to recover it. 00:27:16.040 [2024-07-16 01:32:41.905152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.040 [2024-07-16 01:32:41.905162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.040 qpair failed and we were unable to recover it. 00:27:16.040 [2024-07-16 01:32:41.905300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.040 [2024-07-16 01:32:41.905331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.040 qpair failed and we were unable to recover it. 00:27:16.040 [2024-07-16 01:32:41.905587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.040 [2024-07-16 01:32:41.905618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.040 qpair failed and we were unable to recover it. 00:27:16.040 [2024-07-16 01:32:41.905858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.040 [2024-07-16 01:32:41.905888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.040 qpair failed and we were unable to recover it. 00:27:16.040 [2024-07-16 01:32:41.906029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.040 [2024-07-16 01:32:41.906059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.040 qpair failed and we were unable to recover it. 00:27:16.040 [2024-07-16 01:32:41.906166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.040 [2024-07-16 01:32:41.906196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.040 qpair failed and we were unable to recover it. 00:27:16.041 [2024-07-16 01:32:41.906412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.041 [2024-07-16 01:32:41.906444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.041 qpair failed and we were unable to recover it. 00:27:16.041 [2024-07-16 01:32:41.906570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.041 [2024-07-16 01:32:41.906601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.041 qpair failed and we were unable to recover it. 00:27:16.041 [2024-07-16 01:32:41.906854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.041 [2024-07-16 01:32:41.906884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.041 qpair failed and we were unable to recover it. 
00:27:16.041 [2024-07-16 01:32:41.907160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.041 [2024-07-16 01:32:41.907190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.041 qpair failed and we were unable to recover it. 00:27:16.041 [2024-07-16 01:32:41.907300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.041 [2024-07-16 01:32:41.907329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.041 qpair failed and we were unable to recover it. 00:27:16.041 [2024-07-16 01:32:41.907501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.041 [2024-07-16 01:32:41.907513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.041 qpair failed and we were unable to recover it. 00:27:16.041 [2024-07-16 01:32:41.907599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.041 [2024-07-16 01:32:41.907608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.041 qpair failed and we were unable to recover it. 00:27:16.041 [2024-07-16 01:32:41.907754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.041 [2024-07-16 01:32:41.907784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.041 qpair failed and we were unable to recover it. 00:27:16.041 [2024-07-16 01:32:41.907900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.041 [2024-07-16 01:32:41.907931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.041 qpair failed and we were unable to recover it. 00:27:16.041 [2024-07-16 01:32:41.908138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.041 [2024-07-16 01:32:41.908168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.041 qpair failed and we were unable to recover it. 00:27:16.041 [2024-07-16 01:32:41.908373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.041 [2024-07-16 01:32:41.908406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.041 qpair failed and we were unable to recover it. 00:27:16.041 [2024-07-16 01:32:41.908605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.041 [2024-07-16 01:32:41.908636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.041 qpair failed and we were unable to recover it. 00:27:16.041 [2024-07-16 01:32:41.908824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.041 [2024-07-16 01:32:41.908854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.041 qpair failed and we were unable to recover it. 
00:27:16.041 [2024-07-16 01:32:41.909096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.041 [2024-07-16 01:32:41.909126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.041 qpair failed and we were unable to recover it. 00:27:16.041 [2024-07-16 01:32:41.909310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.041 [2024-07-16 01:32:41.909348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.041 qpair failed and we were unable to recover it. 00:27:16.041 [2024-07-16 01:32:41.909524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.041 [2024-07-16 01:32:41.909555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.041 qpair failed and we were unable to recover it. 00:27:16.041 [2024-07-16 01:32:41.909736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.041 [2024-07-16 01:32:41.909766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.041 qpair failed and we were unable to recover it. 00:27:16.041 [2024-07-16 01:32:41.909936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.041 [2024-07-16 01:32:41.909966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.041 qpair failed and we were unable to recover it. 00:27:16.041 [2024-07-16 01:32:41.910147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.041 [2024-07-16 01:32:41.910177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.041 qpair failed and we were unable to recover it. 00:27:16.041 [2024-07-16 01:32:41.910452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.041 [2024-07-16 01:32:41.910484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.041 qpair failed and we were unable to recover it. 00:27:16.041 [2024-07-16 01:32:41.910668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.041 [2024-07-16 01:32:41.910679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.041 qpair failed and we were unable to recover it. 00:27:16.041 [2024-07-16 01:32:41.910753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.041 [2024-07-16 01:32:41.910765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.041 qpair failed and we were unable to recover it. 00:27:16.041 [2024-07-16 01:32:41.910888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.041 [2024-07-16 01:32:41.910919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.041 qpair failed and we were unable to recover it. 
00:27:16.041 [2024-07-16 01:32:41.911111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.041 [2024-07-16 01:32:41.911141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.041 qpair failed and we were unable to recover it. 00:27:16.041 [2024-07-16 01:32:41.911320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.041 [2024-07-16 01:32:41.911366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.041 qpair failed and we were unable to recover it. 00:27:16.041 [2024-07-16 01:32:41.911594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.041 [2024-07-16 01:32:41.911637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.041 qpair failed and we were unable to recover it. 00:27:16.041 [2024-07-16 01:32:41.911765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.041 [2024-07-16 01:32:41.911775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.041 qpair failed and we were unable to recover it. 00:27:16.041 [2024-07-16 01:32:41.911999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.041 [2024-07-16 01:32:41.912028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.041 qpair failed and we were unable to recover it. 00:27:16.041 [2024-07-16 01:32:41.912209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.041 [2024-07-16 01:32:41.912239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.041 qpair failed and we were unable to recover it. 00:27:16.041 [2024-07-16 01:32:41.912369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.041 [2024-07-16 01:32:41.912401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.041 qpair failed and we were unable to recover it. 00:27:16.041 [2024-07-16 01:32:41.912589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.041 [2024-07-16 01:32:41.912600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.041 qpair failed and we were unable to recover it. 00:27:16.041 [2024-07-16 01:32:41.912696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.041 [2024-07-16 01:32:41.912705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.041 qpair failed and we were unable to recover it. 00:27:16.041 [2024-07-16 01:32:41.912846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.041 [2024-07-16 01:32:41.912856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.041 qpair failed and we were unable to recover it. 
00:27:16.041 [2024-07-16 01:32:41.912938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.041 [2024-07-16 01:32:41.912948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.041 qpair failed and we were unable to recover it. 00:27:16.041 [2024-07-16 01:32:41.913172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.041 [2024-07-16 01:32:41.913202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.042 qpair failed and we were unable to recover it. 00:27:16.042 [2024-07-16 01:32:41.913333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.042 [2024-07-16 01:32:41.913373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.042 qpair failed and we were unable to recover it. 00:27:16.042 [2024-07-16 01:32:41.913566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.042 [2024-07-16 01:32:41.913597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.042 qpair failed and we were unable to recover it. 00:27:16.042 [2024-07-16 01:32:41.913843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.042 [2024-07-16 01:32:41.913873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.042 qpair failed and we were unable to recover it. 00:27:16.042 [2024-07-16 01:32:41.913978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.042 [2024-07-16 01:32:41.914009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.042 qpair failed and we were unable to recover it. 00:27:16.042 [2024-07-16 01:32:41.914294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.042 [2024-07-16 01:32:41.914323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.042 qpair failed and we were unable to recover it. 00:27:16.042 [2024-07-16 01:32:41.914528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.042 [2024-07-16 01:32:41.914559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.042 qpair failed and we were unable to recover it. 00:27:16.042 [2024-07-16 01:32:41.914695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.042 [2024-07-16 01:32:41.914725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.042 qpair failed and we were unable to recover it. 00:27:16.042 [2024-07-16 01:32:41.914833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.042 [2024-07-16 01:32:41.914873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.042 qpair failed and we were unable to recover it. 
00:27:16.042 [2024-07-16 01:32:41.914977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.042 [2024-07-16 01:32:41.914988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.042 qpair failed and we were unable to recover it. 00:27:16.042 [2024-07-16 01:32:41.915168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.042 [2024-07-16 01:32:41.915198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.042 qpair failed and we were unable to recover it. 00:27:16.042 [2024-07-16 01:32:41.915348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.042 [2024-07-16 01:32:41.915393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.042 qpair failed and we were unable to recover it. 00:27:16.042 [2024-07-16 01:32:41.915641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.042 [2024-07-16 01:32:41.915672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.042 qpair failed and we were unable to recover it. 00:27:16.042 [2024-07-16 01:32:41.915847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.042 [2024-07-16 01:32:41.915857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.042 qpair failed and we were unable to recover it. 00:27:16.042 [2024-07-16 01:32:41.915952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.042 [2024-07-16 01:32:41.915962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.042 qpair failed and we were unable to recover it. 00:27:16.042 [2024-07-16 01:32:41.916036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.042 [2024-07-16 01:32:41.916046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.042 qpair failed and we were unable to recover it. 00:27:16.042 [2024-07-16 01:32:41.916296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.042 [2024-07-16 01:32:41.916326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.042 qpair failed and we were unable to recover it. 00:27:16.042 [2024-07-16 01:32:41.916547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.042 [2024-07-16 01:32:41.916579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.042 qpair failed and we were unable to recover it. 00:27:16.042 [2024-07-16 01:32:41.916773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.042 [2024-07-16 01:32:41.916804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.042 qpair failed and we were unable to recover it. 
00:27:16.042 [2024-07-16 01:32:41.916973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.042 [2024-07-16 01:32:41.916985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.042 qpair failed and we were unable to recover it. 00:27:16.042 [2024-07-16 01:32:41.917134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.042 [2024-07-16 01:32:41.917164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.042 qpair failed and we were unable to recover it. 00:27:16.042 [2024-07-16 01:32:41.917350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.042 [2024-07-16 01:32:41.917382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.042 qpair failed and we were unable to recover it. 00:27:16.042 [2024-07-16 01:32:41.917496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.042 [2024-07-16 01:32:41.917526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.042 qpair failed and we were unable to recover it. 00:27:16.042 [2024-07-16 01:32:41.917767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.042 [2024-07-16 01:32:41.917798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.042 qpair failed and we were unable to recover it. 00:27:16.042 [2024-07-16 01:32:41.918012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.042 [2024-07-16 01:32:41.918042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.042 qpair failed and we were unable to recover it. 00:27:16.042 [2024-07-16 01:32:41.918224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.042 [2024-07-16 01:32:41.918254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.042 qpair failed and we were unable to recover it. 00:27:16.042 [2024-07-16 01:32:41.918433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.042 [2024-07-16 01:32:41.918465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.042 qpair failed and we were unable to recover it. 00:27:16.042 [2024-07-16 01:32:41.918662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.042 [2024-07-16 01:32:41.918698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.042 qpair failed and we were unable to recover it. 00:27:16.042 [2024-07-16 01:32:41.918887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.042 [2024-07-16 01:32:41.918917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.042 qpair failed and we were unable to recover it. 
00:27:16.042 [2024-07-16 01:32:41.919018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.042 [2024-07-16 01:32:41.919029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.042 qpair failed and we were unable to recover it. 00:27:16.042 [2024-07-16 01:32:41.919114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.042 [2024-07-16 01:32:41.919124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.042 qpair failed and we were unable to recover it. 00:27:16.042 [2024-07-16 01:32:41.919289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.042 [2024-07-16 01:32:41.919320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.042 qpair failed and we were unable to recover it. 00:27:16.042 [2024-07-16 01:32:41.919451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.042 [2024-07-16 01:32:41.919484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.042 qpair failed and we were unable to recover it. 00:27:16.042 [2024-07-16 01:32:41.919684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.042 [2024-07-16 01:32:41.919715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.042 qpair failed and we were unable to recover it. 00:27:16.042 [2024-07-16 01:32:41.919938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.042 [2024-07-16 01:32:41.919949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.042 qpair failed and we were unable to recover it. 00:27:16.042 [2024-07-16 01:32:41.920085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.042 [2024-07-16 01:32:41.920096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.042 qpair failed and we were unable to recover it. 00:27:16.042 [2024-07-16 01:32:41.920240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.042 [2024-07-16 01:32:41.920271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.042 qpair failed and we were unable to recover it. 00:27:16.042 [2024-07-16 01:32:41.920419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.042 [2024-07-16 01:32:41.920450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.042 qpair failed and we were unable to recover it. 00:27:16.042 [2024-07-16 01:32:41.920655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.042 [2024-07-16 01:32:41.920685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.042 qpair failed and we were unable to recover it. 
00:27:16.042 [2024-07-16 01:32:41.920857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.042 [2024-07-16 01:32:41.920868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.042 qpair failed and we were unable to recover it. 00:27:16.042 [2024-07-16 01:32:41.921067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.042 [2024-07-16 01:32:41.921078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.043 qpair failed and we were unable to recover it. 00:27:16.043 [2024-07-16 01:32:41.921156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.043 [2024-07-16 01:32:41.921166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.043 qpair failed and we were unable to recover it. 00:27:16.043 [2024-07-16 01:32:41.921299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.043 [2024-07-16 01:32:41.921310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.043 qpair failed and we were unable to recover it. 00:27:16.043 [2024-07-16 01:32:41.921376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.043 [2024-07-16 01:32:41.921386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.043 qpair failed and we were unable to recover it. 00:27:16.043 [2024-07-16 01:32:41.921460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.043 [2024-07-16 01:32:41.921471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.043 qpair failed and we were unable to recover it. 00:27:16.043 [2024-07-16 01:32:41.921618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.043 [2024-07-16 01:32:41.921628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.043 qpair failed and we were unable to recover it. 00:27:16.043 [2024-07-16 01:32:41.921842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.043 [2024-07-16 01:32:41.921853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.043 qpair failed and we were unable to recover it. 00:27:16.043 [2024-07-16 01:32:41.921992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.043 [2024-07-16 01:32:41.922003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.043 qpair failed and we were unable to recover it. 00:27:16.043 [2024-07-16 01:32:41.922077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.043 [2024-07-16 01:32:41.922087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.043 qpair failed and we were unable to recover it. 
00:27:16.043 [2024-07-16 01:32:41.922164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.043 [2024-07-16 01:32:41.922174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.043 qpair failed and we were unable to recover it. 00:27:16.043 [2024-07-16 01:32:41.922241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.043 [2024-07-16 01:32:41.922250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.043 qpair failed and we were unable to recover it. 00:27:16.043 [2024-07-16 01:32:41.922424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.043 [2024-07-16 01:32:41.922455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.043 qpair failed and we were unable to recover it. 00:27:16.043 [2024-07-16 01:32:41.922646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.043 [2024-07-16 01:32:41.922676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.043 qpair failed and we were unable to recover it. 00:27:16.043 [2024-07-16 01:32:41.922782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.043 [2024-07-16 01:32:41.922812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.043 qpair failed and we were unable to recover it. 00:27:16.043 [2024-07-16 01:32:41.922951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.043 [2024-07-16 01:32:41.922981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.043 qpair failed and we were unable to recover it. 00:27:16.043 [2024-07-16 01:32:41.923091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.043 [2024-07-16 01:32:41.923120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.043 qpair failed and we were unable to recover it. 00:27:16.043 [2024-07-16 01:32:41.923224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.043 [2024-07-16 01:32:41.923254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.043 qpair failed and we were unable to recover it. 00:27:16.043 [2024-07-16 01:32:41.923459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.043 [2024-07-16 01:32:41.923490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.043 qpair failed and we were unable to recover it. 00:27:16.043 [2024-07-16 01:32:41.923664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.043 [2024-07-16 01:32:41.923675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.043 qpair failed and we were unable to recover it. 
00:27:16.043 [2024-07-16 01:32:41.923825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.043 [2024-07-16 01:32:41.923855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.043 qpair failed and we were unable to recover it. 00:27:16.043 [2024-07-16 01:32:41.923980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.043 [2024-07-16 01:32:41.924010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.043 qpair failed and we were unable to recover it. 00:27:16.043 [2024-07-16 01:32:41.924325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.043 [2024-07-16 01:32:41.924378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.043 qpair failed and we were unable to recover it. 00:27:16.043 [2024-07-16 01:32:41.924572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.043 [2024-07-16 01:32:41.924603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.043 qpair failed and we were unable to recover it. 00:27:16.043 [2024-07-16 01:32:41.924846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.043 [2024-07-16 01:32:41.924877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.043 qpair failed and we were unable to recover it. 00:27:16.043 [2024-07-16 01:32:41.925080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.043 [2024-07-16 01:32:41.925110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.043 qpair failed and we were unable to recover it. 00:27:16.043 [2024-07-16 01:32:41.925285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.043 [2024-07-16 01:32:41.925315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.043 qpair failed and we were unable to recover it. 00:27:16.043 [2024-07-16 01:32:41.925525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.043 [2024-07-16 01:32:41.925560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.043 qpair failed and we were unable to recover it. 00:27:16.043 [2024-07-16 01:32:41.925663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.043 [2024-07-16 01:32:41.925700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.043 qpair failed and we were unable to recover it. 00:27:16.043 [2024-07-16 01:32:41.925836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.043 [2024-07-16 01:32:41.925871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.043 qpair failed and we were unable to recover it. 
00:27:16.043 [2024-07-16 01:32:41.925958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.043 [2024-07-16 01:32:41.925968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.043 qpair failed and we were unable to recover it. 00:27:16.043 [2024-07-16 01:32:41.926099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.043 [2024-07-16 01:32:41.926110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.043 qpair failed and we were unable to recover it. 00:27:16.043 [2024-07-16 01:32:41.926248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.043 [2024-07-16 01:32:41.926280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.043 qpair failed and we were unable to recover it. 00:27:16.043 [2024-07-16 01:32:41.926488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.043 [2024-07-16 01:32:41.926520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.043 qpair failed and we were unable to recover it. 00:27:16.043 [2024-07-16 01:32:41.926699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.043 [2024-07-16 01:32:41.926729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.043 qpair failed and we were unable to recover it. 00:27:16.043 [2024-07-16 01:32:41.926849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.043 [2024-07-16 01:32:41.926879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.043 qpair failed and we were unable to recover it. 00:27:16.043 [2024-07-16 01:32:41.927063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.043 [2024-07-16 01:32:41.927073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.043 qpair failed and we were unable to recover it. 00:27:16.043 [2024-07-16 01:32:41.927162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.043 [2024-07-16 01:32:41.927172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.043 qpair failed and we were unable to recover it. 00:27:16.043 [2024-07-16 01:32:41.927377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.043 [2024-07-16 01:32:41.927404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.043 qpair failed and we were unable to recover it. 00:27:16.043 [2024-07-16 01:32:41.927558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.043 [2024-07-16 01:32:41.927569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.043 qpair failed and we were unable to recover it. 
00:27:16.043 [2024-07-16 01:32:41.927718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.043 [2024-07-16 01:32:41.927729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.043 qpair failed and we were unable to recover it. 00:27:16.043 [2024-07-16 01:32:41.927830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.043 [2024-07-16 01:32:41.927840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.044 qpair failed and we were unable to recover it. 00:27:16.044 [2024-07-16 01:32:41.927929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.044 [2024-07-16 01:32:41.927967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.044 qpair failed and we were unable to recover it. 00:27:16.044 [2024-07-16 01:32:41.928161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.044 [2024-07-16 01:32:41.928191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.044 qpair failed and we were unable to recover it. 00:27:16.044 [2024-07-16 01:32:41.928391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.044 [2024-07-16 01:32:41.928422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.044 qpair failed and we were unable to recover it. 00:27:16.044 [2024-07-16 01:32:41.928550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.044 [2024-07-16 01:32:41.928580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.044 qpair failed and we were unable to recover it. 00:27:16.044 [2024-07-16 01:32:41.928712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.044 [2024-07-16 01:32:41.928742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.044 qpair failed and we were unable to recover it. 00:27:16.044 [2024-07-16 01:32:41.928923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.044 [2024-07-16 01:32:41.928954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.044 qpair failed and we were unable to recover it. 00:27:16.044 [2024-07-16 01:32:41.929092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.044 [2024-07-16 01:32:41.929123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.044 qpair failed and we were unable to recover it. 00:27:16.044 [2024-07-16 01:32:41.929297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.044 [2024-07-16 01:32:41.929327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.044 qpair failed and we were unable to recover it. 
00:27:16.044 [2024-07-16 01:32:41.929620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.044 [2024-07-16 01:32:41.929653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.044 qpair failed and we were unable to recover it. 00:27:16.044 [2024-07-16 01:32:41.929921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.044 [2024-07-16 01:32:41.929951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.044 qpair failed and we were unable to recover it. 00:27:16.044 [2024-07-16 01:32:41.930192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.044 [2024-07-16 01:32:41.930223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.044 qpair failed and we were unable to recover it. 00:27:16.044 [2024-07-16 01:32:41.930415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.044 [2024-07-16 01:32:41.930446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.044 qpair failed and we were unable to recover it. 00:27:16.044 [2024-07-16 01:32:41.930731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.044 [2024-07-16 01:32:41.930762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.044 qpair failed and we were unable to recover it. 00:27:16.044 [2024-07-16 01:32:41.931011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.044 [2024-07-16 01:32:41.931081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.044 qpair failed and we were unable to recover it. 00:27:16.044 [2024-07-16 01:32:41.931404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.044 [2024-07-16 01:32:41.931473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.044 qpair failed and we were unable to recover it. 00:27:16.044 [2024-07-16 01:32:41.931771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.044 [2024-07-16 01:32:41.931840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.044 qpair failed and we were unable to recover it. 00:27:16.044 [2024-07-16 01:32:41.932185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.044 [2024-07-16 01:32:41.932219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.044 qpair failed and we were unable to recover it. 00:27:16.044 [2024-07-16 01:32:41.932440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.044 [2024-07-16 01:32:41.932472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.044 qpair failed and we were unable to recover it. 
00:27:16.335 [the same three-line error sequence — posix.c:1023:posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it." — repeats without interruption from [2024-07-16 01:32:41.932665] through [2024-07-16 01:32:41.970318], cycling over tqpairs 0x7ff0bc000b90, 0x7ff0b4000b90, 0x7ff0c4000b90, and 0x1d2cfc0, all targeting addr=10.0.0.2, port=4420]
00:27:16.335 [2024-07-16 01:32:41.970466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.335 [2024-07-16 01:32:41.970483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.335 qpair failed and we were unable to recover it. 00:27:16.335 [2024-07-16 01:32:41.970627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.335 [2024-07-16 01:32:41.970643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.335 qpair failed and we were unable to recover it. 00:27:16.335 [2024-07-16 01:32:41.970716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.335 [2024-07-16 01:32:41.970731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.335 qpair failed and we were unable to recover it. 00:27:16.335 [2024-07-16 01:32:41.970891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.335 [2024-07-16 01:32:41.970907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.335 qpair failed and we were unable to recover it. 00:27:16.335 [2024-07-16 01:32:41.970991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.335 [2024-07-16 01:32:41.971006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.335 qpair failed and we were unable to recover it. 00:27:16.335 [2024-07-16 01:32:41.971243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.335 [2024-07-16 01:32:41.971259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.335 qpair failed and we were unable to recover it. 00:27:16.335 [2024-07-16 01:32:41.971342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.335 [2024-07-16 01:32:41.971358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.335 qpair failed and we were unable to recover it. 00:27:16.335 [2024-07-16 01:32:41.971497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.335 [2024-07-16 01:32:41.971513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.335 qpair failed and we were unable to recover it. 00:27:16.335 [2024-07-16 01:32:41.971603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.335 [2024-07-16 01:32:41.971617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.335 qpair failed and we were unable to recover it. 00:27:16.335 [2024-07-16 01:32:41.971693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.335 [2024-07-16 01:32:41.971708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.335 qpair failed and we were unable to recover it. 
00:27:16.335 [2024-07-16 01:32:41.971789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.335 [2024-07-16 01:32:41.971807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.335 qpair failed and we were unable to recover it. 00:27:16.335 [2024-07-16 01:32:41.971887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.335 [2024-07-16 01:32:41.971902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.335 qpair failed and we were unable to recover it. 00:27:16.335 [2024-07-16 01:32:41.971999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.335 [2024-07-16 01:32:41.972014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.335 qpair failed and we were unable to recover it. 00:27:16.335 [2024-07-16 01:32:41.972152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.335 [2024-07-16 01:32:41.972168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.335 qpair failed and we were unable to recover it. 00:27:16.335 [2024-07-16 01:32:41.972248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.335 [2024-07-16 01:32:41.972263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.335 qpair failed and we were unable to recover it. 00:27:16.335 [2024-07-16 01:32:41.972356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.335 [2024-07-16 01:32:41.972371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.335 qpair failed and we were unable to recover it. 00:27:16.335 [2024-07-16 01:32:41.972490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.335 [2024-07-16 01:32:41.972523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.335 qpair failed and we were unable to recover it. 00:27:16.335 [2024-07-16 01:32:41.972601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.335 [2024-07-16 01:32:41.972612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.335 qpair failed and we were unable to recover it. 00:27:16.335 [2024-07-16 01:32:41.972774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.335 [2024-07-16 01:32:41.972786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.335 qpair failed and we were unable to recover it. 00:27:16.335 [2024-07-16 01:32:41.972924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.335 [2024-07-16 01:32:41.972936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.335 qpair failed and we were unable to recover it. 
00:27:16.335 [2024-07-16 01:32:41.973002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.335 [2024-07-16 01:32:41.973012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.335 qpair failed and we were unable to recover it. 00:27:16.335 [2024-07-16 01:32:41.973149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.335 [2024-07-16 01:32:41.973160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.335 qpair failed and we were unable to recover it. 00:27:16.335 [2024-07-16 01:32:41.973239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.335 [2024-07-16 01:32:41.973249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.335 qpair failed and we were unable to recover it. 00:27:16.335 [2024-07-16 01:32:41.973319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.335 [2024-07-16 01:32:41.973330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.335 qpair failed and we were unable to recover it. 00:27:16.335 [2024-07-16 01:32:41.973422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.335 [2024-07-16 01:32:41.973433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.336 qpair failed and we were unable to recover it. 00:27:16.336 [2024-07-16 01:32:41.973577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.336 [2024-07-16 01:32:41.973588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.336 qpair failed and we were unable to recover it. 00:27:16.336 [2024-07-16 01:32:41.973668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.336 [2024-07-16 01:32:41.973678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.336 qpair failed and we were unable to recover it. 00:27:16.336 [2024-07-16 01:32:41.973739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.336 [2024-07-16 01:32:41.973748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.336 qpair failed and we were unable to recover it. 00:27:16.336 [2024-07-16 01:32:41.973824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.336 [2024-07-16 01:32:41.973834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.336 qpair failed and we were unable to recover it. 00:27:16.336 [2024-07-16 01:32:41.973896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.336 [2024-07-16 01:32:41.973907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.336 qpair failed and we were unable to recover it. 
00:27:16.336 [2024-07-16 01:32:41.974065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.336 [2024-07-16 01:32:41.974077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.336 qpair failed and we were unable to recover it. 00:27:16.336 [2024-07-16 01:32:41.974162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.336 [2024-07-16 01:32:41.974172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.336 qpair failed and we were unable to recover it. 00:27:16.336 [2024-07-16 01:32:41.974307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.336 [2024-07-16 01:32:41.974319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.336 qpair failed and we were unable to recover it. 00:27:16.336 [2024-07-16 01:32:41.974474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.336 [2024-07-16 01:32:41.974486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.336 qpair failed and we were unable to recover it. 00:27:16.336 [2024-07-16 01:32:41.974565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.336 [2024-07-16 01:32:41.974575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.336 qpair failed and we were unable to recover it. 00:27:16.336 [2024-07-16 01:32:41.974647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.336 [2024-07-16 01:32:41.974657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.336 qpair failed and we were unable to recover it. 00:27:16.336 [2024-07-16 01:32:41.974785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.336 [2024-07-16 01:32:41.974816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.336 qpair failed and we were unable to recover it. 00:27:16.336 [2024-07-16 01:32:41.974929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.336 [2024-07-16 01:32:41.974962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.336 qpair failed and we were unable to recover it. 00:27:16.336 [2024-07-16 01:32:41.975146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.336 [2024-07-16 01:32:41.975177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.336 qpair failed and we were unable to recover it. 00:27:16.336 [2024-07-16 01:32:41.975285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.336 [2024-07-16 01:32:41.975315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.336 qpair failed and we were unable to recover it. 
00:27:16.336 [2024-07-16 01:32:41.975432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.336 [2024-07-16 01:32:41.975462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.336 qpair failed and we were unable to recover it. 00:27:16.336 [2024-07-16 01:32:41.975570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.336 [2024-07-16 01:32:41.975600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.336 qpair failed and we were unable to recover it. 00:27:16.336 [2024-07-16 01:32:41.975776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.336 [2024-07-16 01:32:41.975806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.336 qpair failed and we were unable to recover it. 00:27:16.336 [2024-07-16 01:32:41.975929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.336 [2024-07-16 01:32:41.975959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.336 qpair failed and we were unable to recover it. 00:27:16.336 [2024-07-16 01:32:41.976223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.336 [2024-07-16 01:32:41.976239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.336 qpair failed and we were unable to recover it. 00:27:16.336 [2024-07-16 01:32:41.976310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.336 [2024-07-16 01:32:41.976324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.336 qpair failed and we were unable to recover it. 00:27:16.336 [2024-07-16 01:32:41.976434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.336 [2024-07-16 01:32:41.976450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.336 qpair failed and we were unable to recover it. 00:27:16.336 [2024-07-16 01:32:41.976546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.336 [2024-07-16 01:32:41.976562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.336 qpair failed and we were unable to recover it. 00:27:16.336 [2024-07-16 01:32:41.976650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.336 [2024-07-16 01:32:41.976666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.336 qpair failed and we were unable to recover it. 00:27:16.336 [2024-07-16 01:32:41.976752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.336 [2024-07-16 01:32:41.976764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.336 qpair failed and we were unable to recover it. 
00:27:16.336 [2024-07-16 01:32:41.976840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.336 [2024-07-16 01:32:41.976853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.336 qpair failed and we were unable to recover it. 00:27:16.336 [2024-07-16 01:32:41.976932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.336 [2024-07-16 01:32:41.976974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.336 qpair failed and we were unable to recover it. 00:27:16.336 [2024-07-16 01:32:41.977100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.336 [2024-07-16 01:32:41.977130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.336 qpair failed and we were unable to recover it. 00:27:16.336 [2024-07-16 01:32:41.977373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.336 [2024-07-16 01:32:41.977404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.336 qpair failed and we were unable to recover it. 00:27:16.336 [2024-07-16 01:32:41.977534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.336 [2024-07-16 01:32:41.977565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.336 qpair failed and we were unable to recover it. 00:27:16.336 [2024-07-16 01:32:41.977740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.336 [2024-07-16 01:32:41.977771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.336 qpair failed and we were unable to recover it. 00:27:16.336 [2024-07-16 01:32:41.977882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.336 [2024-07-16 01:32:41.977912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.336 qpair failed and we were unable to recover it. 00:27:16.336 [2024-07-16 01:32:41.978015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.336 [2024-07-16 01:32:41.978026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.336 qpair failed and we were unable to recover it. 00:27:16.336 [2024-07-16 01:32:41.978175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.336 [2024-07-16 01:32:41.978186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.336 qpair failed and we were unable to recover it. 00:27:16.336 [2024-07-16 01:32:41.978343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.336 [2024-07-16 01:32:41.978357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.336 qpair failed and we were unable to recover it. 
00:27:16.336 [2024-07-16 01:32:41.978434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.337 [2024-07-16 01:32:41.978444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.337 qpair failed and we were unable to recover it. 00:27:16.337 [2024-07-16 01:32:41.978663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.337 [2024-07-16 01:32:41.978674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.337 qpair failed and we were unable to recover it. 00:27:16.337 [2024-07-16 01:32:41.978826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.337 [2024-07-16 01:32:41.978856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.337 qpair failed and we were unable to recover it. 00:27:16.337 [2024-07-16 01:32:41.979102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.337 [2024-07-16 01:32:41.979133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.337 qpair failed and we were unable to recover it. 00:27:16.337 [2024-07-16 01:32:41.979246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.337 [2024-07-16 01:32:41.979277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.337 qpair failed and we were unable to recover it. 00:27:16.337 [2024-07-16 01:32:41.979454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.337 [2024-07-16 01:32:41.979486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.337 qpair failed and we were unable to recover it. 00:27:16.337 [2024-07-16 01:32:41.979625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.337 [2024-07-16 01:32:41.979657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.337 qpair failed and we were unable to recover it. 00:27:16.337 [2024-07-16 01:32:41.979795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.337 [2024-07-16 01:32:41.979826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.337 qpair failed and we were unable to recover it. 00:27:16.337 [2024-07-16 01:32:41.979955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.337 [2024-07-16 01:32:41.979986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.337 qpair failed and we were unable to recover it. 00:27:16.337 [2024-07-16 01:32:41.980121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.337 [2024-07-16 01:32:41.980152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.337 qpair failed and we were unable to recover it. 
00:27:16.337 [2024-07-16 01:32:41.980282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.337 [2024-07-16 01:32:41.980314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.337 qpair failed and we were unable to recover it. 00:27:16.337 [2024-07-16 01:32:41.980463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.337 [2024-07-16 01:32:41.980496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.337 qpair failed and we were unable to recover it. 00:27:16.337 [2024-07-16 01:32:41.980675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.337 [2024-07-16 01:32:41.980705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.337 qpair failed and we were unable to recover it. 00:27:16.337 [2024-07-16 01:32:41.980818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.337 [2024-07-16 01:32:41.980849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.337 qpair failed and we were unable to recover it. 00:27:16.337 [2024-07-16 01:32:41.981039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.337 [2024-07-16 01:32:41.981071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.337 qpair failed and we were unable to recover it. 00:27:16.337 [2024-07-16 01:32:41.981182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.337 [2024-07-16 01:32:41.981212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.337 qpair failed and we were unable to recover it. 00:27:16.337 [2024-07-16 01:32:41.981321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.337 [2024-07-16 01:32:41.981372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.337 qpair failed and we were unable to recover it. 00:27:16.337 [2024-07-16 01:32:41.981591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.337 [2024-07-16 01:32:41.981627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.337 qpair failed and we were unable to recover it. 00:27:16.337 [2024-07-16 01:32:41.981826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.337 [2024-07-16 01:32:41.981856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.337 qpair failed and we were unable to recover it. 00:27:16.337 [2024-07-16 01:32:41.981965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.337 [2024-07-16 01:32:41.981995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.337 qpair failed and we were unable to recover it. 
00:27:16.337 [2024-07-16 01:32:41.982106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.337 [2024-07-16 01:32:41.982135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.337 qpair failed and we were unable to recover it. 00:27:16.337 [2024-07-16 01:32:41.982407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.337 [2024-07-16 01:32:41.982440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.337 qpair failed and we were unable to recover it. 00:27:16.337 [2024-07-16 01:32:41.982693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.337 [2024-07-16 01:32:41.982723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.337 qpair failed and we were unable to recover it. 00:27:16.337 [2024-07-16 01:32:41.982933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.337 [2024-07-16 01:32:41.982963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.337 qpair failed and we were unable to recover it. 00:27:16.337 [2024-07-16 01:32:41.983072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.337 [2024-07-16 01:32:41.983102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.337 qpair failed and we were unable to recover it. 00:27:16.337 [2024-07-16 01:32:41.983198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.337 [2024-07-16 01:32:41.983229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.337 qpair failed and we were unable to recover it. 00:27:16.337 [2024-07-16 01:32:41.983368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.337 [2024-07-16 01:32:41.983400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.337 qpair failed and we were unable to recover it. 00:27:16.337 [2024-07-16 01:32:41.983643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.337 [2024-07-16 01:32:41.983674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.337 qpair failed and we were unable to recover it. 00:27:16.337 [2024-07-16 01:32:41.983881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.337 [2024-07-16 01:32:41.983911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.337 qpair failed and we were unable to recover it. 00:27:16.337 [2024-07-16 01:32:41.984097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.337 [2024-07-16 01:32:41.984113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.337 qpair failed and we were unable to recover it. 
00:27:16.337 [2024-07-16 01:32:41.984279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.337 [2024-07-16 01:32:41.984296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.337 qpair failed and we were unable to recover it. 00:27:16.337 [2024-07-16 01:32:41.984447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.337 [2024-07-16 01:32:41.984463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.337 qpair failed and we were unable to recover it. 00:27:16.337 [2024-07-16 01:32:41.984619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.337 [2024-07-16 01:32:41.984649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.337 qpair failed and we were unable to recover it. 00:27:16.337 [2024-07-16 01:32:41.984776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.337 [2024-07-16 01:32:41.984807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.337 qpair failed and we were unable to recover it. 00:27:16.337 [2024-07-16 01:32:41.984933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.337 [2024-07-16 01:32:41.984965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.337 qpair failed and we were unable to recover it. 00:27:16.338 [2024-07-16 01:32:41.985091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.338 [2024-07-16 01:32:41.985121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.338 qpair failed and we were unable to recover it. 00:27:16.338 [2024-07-16 01:32:41.985355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.338 [2024-07-16 01:32:41.985387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.338 qpair failed and we were unable to recover it. 00:27:16.338 [2024-07-16 01:32:41.985564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.338 [2024-07-16 01:32:41.985594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.338 qpair failed and we were unable to recover it. 00:27:16.338 [2024-07-16 01:32:41.985778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.338 [2024-07-16 01:32:41.985808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.338 qpair failed and we were unable to recover it. 00:27:16.338 [2024-07-16 01:32:41.986010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.338 [2024-07-16 01:32:41.986040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.338 qpair failed and we were unable to recover it. 
00:27:16.338 [2024-07-16 01:32:41.986144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.338 [2024-07-16 01:32:41.986160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.338 qpair failed and we were unable to recover it. 00:27:16.338 [2024-07-16 01:32:41.986237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.338 [2024-07-16 01:32:41.986253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.338 qpair failed and we were unable to recover it. 00:27:16.338 [2024-07-16 01:32:41.986370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.338 [2024-07-16 01:32:41.986401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.338 qpair failed and we were unable to recover it. 00:27:16.338 [2024-07-16 01:32:41.986556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.338 [2024-07-16 01:32:41.986586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.338 qpair failed and we were unable to recover it. 00:27:16.338 [2024-07-16 01:32:41.986779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.338 [2024-07-16 01:32:41.986824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.338 qpair failed and we were unable to recover it. 00:27:16.338 [2024-07-16 01:32:41.986941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.338 [2024-07-16 01:32:41.986957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.338 qpair failed and we were unable to recover it. 00:27:16.338 [2024-07-16 01:32:41.987069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.338 [2024-07-16 01:32:41.987085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.338 qpair failed and we were unable to recover it. 00:27:16.338 [2024-07-16 01:32:41.987324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.338 [2024-07-16 01:32:41.987348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.338 qpair failed and we were unable to recover it. 00:27:16.338 [2024-07-16 01:32:41.987486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.338 [2024-07-16 01:32:41.987498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.338 qpair failed and we were unable to recover it. 00:27:16.338 [2024-07-16 01:32:41.987653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.338 [2024-07-16 01:32:41.987684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.338 qpair failed and we were unable to recover it. 
00:27:16.338 [2024-07-16 01:32:41.987862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.338 [2024-07-16 01:32:41.987893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.338 qpair failed and we were unable to recover it. 00:27:16.338 [2024-07-16 01:32:41.988074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.338 [2024-07-16 01:32:41.988104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.338 qpair failed and we were unable to recover it. 00:27:16.338 [2024-07-16 01:32:41.988239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.338 [2024-07-16 01:32:41.988250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.338 qpair failed and we were unable to recover it. 00:27:16.338 [2024-07-16 01:32:41.988329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.338 [2024-07-16 01:32:41.988344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.338 qpair failed and we were unable to recover it. 00:27:16.338 [2024-07-16 01:32:41.988424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.338 [2024-07-16 01:32:41.988435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.338 qpair failed and we were unable to recover it. 00:27:16.338 [2024-07-16 01:32:41.988631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.338 [2024-07-16 01:32:41.988650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.338 qpair failed and we were unable to recover it. 00:27:16.338 [2024-07-16 01:32:41.988757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.338 [2024-07-16 01:32:41.988772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.338 qpair failed and we were unable to recover it. 00:27:16.338 [2024-07-16 01:32:41.988927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.338 [2024-07-16 01:32:41.988943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.338 qpair failed and we were unable to recover it. 00:27:16.338 [2024-07-16 01:32:41.989059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.338 [2024-07-16 01:32:41.989071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.338 qpair failed and we were unable to recover it. 00:27:16.338 [2024-07-16 01:32:41.989268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.338 [2024-07-16 01:32:41.989279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.338 qpair failed and we were unable to recover it. 
00:27:16.338 [2024-07-16 01:32:41.989374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.338 [2024-07-16 01:32:41.989386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.338 qpair failed and we were unable to recover it. 00:27:16.338 [2024-07-16 01:32:41.989466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.338 [2024-07-16 01:32:41.989476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.338 qpair failed and we were unable to recover it. 00:27:16.338 [2024-07-16 01:32:41.989549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.338 [2024-07-16 01:32:41.989559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.338 qpair failed and we were unable to recover it. 00:27:16.338 [2024-07-16 01:32:41.989722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.338 [2024-07-16 01:32:41.989733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.338 qpair failed and we were unable to recover it. 00:27:16.338 [2024-07-16 01:32:41.989803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.338 [2024-07-16 01:32:41.989812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.338 qpair failed and we were unable to recover it. 00:27:16.338 [2024-07-16 01:32:41.989895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.338 [2024-07-16 01:32:41.989904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.338 qpair failed and we were unable to recover it. 00:27:16.338 [2024-07-16 01:32:41.990102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.338 [2024-07-16 01:32:41.990113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.338 qpair failed and we were unable to recover it. 00:27:16.338 [2024-07-16 01:32:41.990197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.338 [2024-07-16 01:32:41.990208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.338 qpair failed and we were unable to recover it. 00:27:16.338 [2024-07-16 01:32:41.990284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.338 [2024-07-16 01:32:41.990294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.338 qpair failed and we were unable to recover it. 00:27:16.338 [2024-07-16 01:32:41.990381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.338 [2024-07-16 01:32:41.990392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.338 qpair failed and we were unable to recover it. 
00:27:16.338 [2024-07-16 01:32:41.990548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.338 [2024-07-16 01:32:41.990559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.338 qpair failed and we were unable to recover it. 00:27:16.338 [2024-07-16 01:32:41.990646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.338 [2024-07-16 01:32:41.990657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.338 qpair failed and we were unable to recover it. 00:27:16.338 [2024-07-16 01:32:41.990794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.338 [2024-07-16 01:32:41.990804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.338 qpair failed and we were unable to recover it. 00:27:16.338 [2024-07-16 01:32:41.990963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.338 [2024-07-16 01:32:41.990974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.338 qpair failed and we were unable to recover it. 00:27:16.338 [2024-07-16 01:32:41.991122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.338 [2024-07-16 01:32:41.991132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.338 qpair failed and we were unable to recover it. 00:27:16.338 [2024-07-16 01:32:41.991201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.339 [2024-07-16 01:32:41.991211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.339 qpair failed and we were unable to recover it. 00:27:16.339 [2024-07-16 01:32:41.991289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.339 [2024-07-16 01:32:41.991300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.339 qpair failed and we were unable to recover it. 00:27:16.339 [2024-07-16 01:32:41.991371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.339 [2024-07-16 01:32:41.991381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.339 qpair failed and we were unable to recover it. 00:27:16.339 [2024-07-16 01:32:41.991445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.339 [2024-07-16 01:32:41.991455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.339 qpair failed and we were unable to recover it. 00:27:16.339 [2024-07-16 01:32:41.991599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.339 [2024-07-16 01:32:41.991610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.339 qpair failed and we were unable to recover it. 
00:27:16.344 [2024-07-16 01:32:42.015033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.344 [2024-07-16 01:32:42.015044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.344 qpair failed and we were unable to recover it. 00:27:16.344 [2024-07-16 01:32:42.015123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.344 [2024-07-16 01:32:42.015134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.345 qpair failed and we were unable to recover it. 00:27:16.345 [2024-07-16 01:32:42.015222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.345 [2024-07-16 01:32:42.015232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.345 qpair failed and we were unable to recover it. 00:27:16.345 [2024-07-16 01:32:42.015323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.345 [2024-07-16 01:32:42.015333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.345 qpair failed and we were unable to recover it. 00:27:16.345 [2024-07-16 01:32:42.015480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.345 [2024-07-16 01:32:42.015493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.345 qpair failed and we were unable to recover it. 00:27:16.345 [2024-07-16 01:32:42.015570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.345 [2024-07-16 01:32:42.015582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.345 qpair failed and we were unable to recover it. 00:27:16.345 [2024-07-16 01:32:42.015654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.345 [2024-07-16 01:32:42.015665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.345 qpair failed and we were unable to recover it. 00:27:16.345 [2024-07-16 01:32:42.015865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.345 [2024-07-16 01:32:42.015876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.345 qpair failed and we were unable to recover it. 00:27:16.345 [2024-07-16 01:32:42.016028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.345 [2024-07-16 01:32:42.016039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.345 qpair failed and we were unable to recover it. 00:27:16.345 [2024-07-16 01:32:42.016172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.345 [2024-07-16 01:32:42.016183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.345 qpair failed and we were unable to recover it. 
00:27:16.345 [2024-07-16 01:32:42.016345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.345 [2024-07-16 01:32:42.016356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.345 qpair failed and we were unable to recover it. 00:27:16.345 [2024-07-16 01:32:42.016436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.345 [2024-07-16 01:32:42.016449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.345 qpair failed and we were unable to recover it. 00:27:16.345 [2024-07-16 01:32:42.016520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.345 [2024-07-16 01:32:42.016531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.345 qpair failed and we were unable to recover it. 00:27:16.345 [2024-07-16 01:32:42.016689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.345 [2024-07-16 01:32:42.016700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.345 qpair failed and we were unable to recover it. 00:27:16.345 [2024-07-16 01:32:42.016841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.345 [2024-07-16 01:32:42.016852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.345 qpair failed and we were unable to recover it. 00:27:16.345 [2024-07-16 01:32:42.017052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.345 [2024-07-16 01:32:42.017062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.345 qpair failed and we were unable to recover it. 00:27:16.345 [2024-07-16 01:32:42.017187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.345 [2024-07-16 01:32:42.017197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.345 qpair failed and we were unable to recover it. 00:27:16.345 [2024-07-16 01:32:42.017328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.345 [2024-07-16 01:32:42.017342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.345 qpair failed and we were unable to recover it. 00:27:16.345 [2024-07-16 01:32:42.017410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.345 [2024-07-16 01:32:42.017422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.345 qpair failed and we were unable to recover it. 00:27:16.345 [2024-07-16 01:32:42.017514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.345 [2024-07-16 01:32:42.017524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.345 qpair failed and we were unable to recover it. 
00:27:16.345 [2024-07-16 01:32:42.017657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.345 [2024-07-16 01:32:42.017667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.345 qpair failed and we were unable to recover it. 00:27:16.345 [2024-07-16 01:32:42.017730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.345 [2024-07-16 01:32:42.017740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.345 qpair failed and we were unable to recover it. 00:27:16.345 [2024-07-16 01:32:42.017871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.345 [2024-07-16 01:32:42.017882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.345 qpair failed and we were unable to recover it. 00:27:16.345 [2024-07-16 01:32:42.018018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.345 [2024-07-16 01:32:42.018028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.345 qpair failed and we were unable to recover it. 00:27:16.345 [2024-07-16 01:32:42.018112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.345 [2024-07-16 01:32:42.018122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.345 qpair failed and we were unable to recover it. 00:27:16.345 [2024-07-16 01:32:42.018209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.345 [2024-07-16 01:32:42.018219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.345 qpair failed and we were unable to recover it. 00:27:16.345 [2024-07-16 01:32:42.018303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.345 [2024-07-16 01:32:42.018313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.345 qpair failed and we were unable to recover it. 00:27:16.345 [2024-07-16 01:32:42.018379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.345 [2024-07-16 01:32:42.018391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.345 qpair failed and we were unable to recover it. 00:27:16.345 [2024-07-16 01:32:42.018533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.345 [2024-07-16 01:32:42.018545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.345 qpair failed and we were unable to recover it. 00:27:16.345 [2024-07-16 01:32:42.018610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.345 [2024-07-16 01:32:42.018620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.345 qpair failed and we were unable to recover it. 
00:27:16.345 [2024-07-16 01:32:42.018761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.345 [2024-07-16 01:32:42.018771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.345 qpair failed and we were unable to recover it. 00:27:16.345 [2024-07-16 01:32:42.018865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.345 [2024-07-16 01:32:42.018888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.345 qpair failed and we were unable to recover it. 00:27:16.345 [2024-07-16 01:32:42.018995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.345 [2024-07-16 01:32:42.019011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.345 qpair failed and we were unable to recover it. 00:27:16.345 [2024-07-16 01:32:42.019176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.345 [2024-07-16 01:32:42.019192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.345 qpair failed and we were unable to recover it. 00:27:16.345 [2024-07-16 01:32:42.019379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.345 [2024-07-16 01:32:42.019392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.345 qpair failed and we were unable to recover it. 00:27:16.345 [2024-07-16 01:32:42.019454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.345 [2024-07-16 01:32:42.019464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.345 qpair failed and we were unable to recover it. 00:27:16.345 [2024-07-16 01:32:42.019687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.345 [2024-07-16 01:32:42.019698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.345 qpair failed and we were unable to recover it. 00:27:16.345 [2024-07-16 01:32:42.019783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.345 [2024-07-16 01:32:42.019793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.345 qpair failed and we were unable to recover it. 00:27:16.345 [2024-07-16 01:32:42.019937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.345 [2024-07-16 01:32:42.019948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.345 qpair failed and we were unable to recover it. 00:27:16.345 [2024-07-16 01:32:42.020024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.346 [2024-07-16 01:32:42.020035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.346 qpair failed and we were unable to recover it. 
00:27:16.346 [2024-07-16 01:32:42.020108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.346 [2024-07-16 01:32:42.020118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.346 qpair failed and we were unable to recover it. 00:27:16.346 [2024-07-16 01:32:42.020264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.346 [2024-07-16 01:32:42.020275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.346 qpair failed and we were unable to recover it. 00:27:16.346 [2024-07-16 01:32:42.020346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.346 [2024-07-16 01:32:42.020357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.346 qpair failed and we were unable to recover it. 00:27:16.346 [2024-07-16 01:32:42.020444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.346 [2024-07-16 01:32:42.020455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.346 qpair failed and we were unable to recover it. 00:27:16.346 [2024-07-16 01:32:42.020584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.346 [2024-07-16 01:32:42.020595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.346 qpair failed and we were unable to recover it. 00:27:16.346 [2024-07-16 01:32:42.020687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.346 [2024-07-16 01:32:42.020698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.346 qpair failed and we were unable to recover it. 00:27:16.346 [2024-07-16 01:32:42.020776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.346 [2024-07-16 01:32:42.020786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.346 qpair failed and we were unable to recover it. 00:27:16.346 [2024-07-16 01:32:42.020865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.346 [2024-07-16 01:32:42.020875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.346 qpair failed and we were unable to recover it. 00:27:16.346 [2024-07-16 01:32:42.020966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.346 [2024-07-16 01:32:42.020977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.346 qpair failed and we were unable to recover it. 00:27:16.346 [2024-07-16 01:32:42.021107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.346 [2024-07-16 01:32:42.021118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.346 qpair failed and we were unable to recover it. 
00:27:16.346 [2024-07-16 01:32:42.021192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.346 [2024-07-16 01:32:42.021203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.346 qpair failed and we were unable to recover it. 00:27:16.346 [2024-07-16 01:32:42.021423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.346 [2024-07-16 01:32:42.021434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.346 qpair failed and we were unable to recover it. 00:27:16.346 [2024-07-16 01:32:42.021592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.346 [2024-07-16 01:32:42.021603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.346 qpair failed and we were unable to recover it. 00:27:16.346 [2024-07-16 01:32:42.021681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.346 [2024-07-16 01:32:42.021692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.346 qpair failed and we were unable to recover it. 00:27:16.346 [2024-07-16 01:32:42.021772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.346 [2024-07-16 01:32:42.021783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.346 qpair failed and we were unable to recover it. 00:27:16.346 [2024-07-16 01:32:42.021877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.346 [2024-07-16 01:32:42.021887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.346 qpair failed and we were unable to recover it. 00:27:16.346 [2024-07-16 01:32:42.021962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.346 [2024-07-16 01:32:42.021972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.346 qpair failed and we were unable to recover it. 00:27:16.346 [2024-07-16 01:32:42.022045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.346 [2024-07-16 01:32:42.022056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.346 qpair failed and we were unable to recover it. 00:27:16.346 [2024-07-16 01:32:42.022144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.346 [2024-07-16 01:32:42.022154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.346 qpair failed and we were unable to recover it. 00:27:16.346 [2024-07-16 01:32:42.022313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.346 [2024-07-16 01:32:42.022323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.346 qpair failed and we were unable to recover it. 
00:27:16.346 [2024-07-16 01:32:42.022404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.346 [2024-07-16 01:32:42.022417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.346 qpair failed and we were unable to recover it. 00:27:16.346 [2024-07-16 01:32:42.022483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.346 [2024-07-16 01:32:42.022494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.346 qpair failed and we were unable to recover it. 00:27:16.346 [2024-07-16 01:32:42.022694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.346 [2024-07-16 01:32:42.022705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.346 qpair failed and we were unable to recover it. 00:27:16.346 [2024-07-16 01:32:42.022848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.346 [2024-07-16 01:32:42.022858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.346 qpair failed and we were unable to recover it. 00:27:16.346 [2024-07-16 01:32:42.023000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.346 [2024-07-16 01:32:42.023010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.346 qpair failed and we were unable to recover it. 00:27:16.346 [2024-07-16 01:32:42.023156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.346 [2024-07-16 01:32:42.023173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.346 qpair failed and we were unable to recover it. 00:27:16.346 [2024-07-16 01:32:42.023252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.346 [2024-07-16 01:32:42.023262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.346 qpair failed and we were unable to recover it. 00:27:16.346 [2024-07-16 01:32:42.023341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.346 [2024-07-16 01:32:42.023352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.346 qpair failed and we were unable to recover it. 00:27:16.346 [2024-07-16 01:32:42.023424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.346 [2024-07-16 01:32:42.023435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.346 qpair failed and we were unable to recover it. 00:27:16.346 [2024-07-16 01:32:42.023569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.346 [2024-07-16 01:32:42.023580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.346 qpair failed and we were unable to recover it. 
00:27:16.346 [2024-07-16 01:32:42.023723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.346 [2024-07-16 01:32:42.023734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.346 qpair failed and we were unable to recover it. 00:27:16.346 [2024-07-16 01:32:42.023794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.346 [2024-07-16 01:32:42.023806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.346 qpair failed and we were unable to recover it. 00:27:16.346 [2024-07-16 01:32:42.023871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.346 [2024-07-16 01:32:42.023880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.346 qpair failed and we were unable to recover it. 00:27:16.346 [2024-07-16 01:32:42.023943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.346 [2024-07-16 01:32:42.023954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.346 qpair failed and we were unable to recover it. 00:27:16.346 [2024-07-16 01:32:42.024104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.346 [2024-07-16 01:32:42.024114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.346 qpair failed and we were unable to recover it. 00:27:16.346 [2024-07-16 01:32:42.024185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.346 [2024-07-16 01:32:42.024195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.346 qpair failed and we were unable to recover it. 00:27:16.346 [2024-07-16 01:32:42.024275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.346 [2024-07-16 01:32:42.024286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.346 qpair failed and we were unable to recover it. 00:27:16.346 [2024-07-16 01:32:42.024350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.347 [2024-07-16 01:32:42.024361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.347 qpair failed and we were unable to recover it. 00:27:16.347 [2024-07-16 01:32:42.024506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.347 [2024-07-16 01:32:42.024517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.347 qpair failed and we were unable to recover it. 00:27:16.347 [2024-07-16 01:32:42.024602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.347 [2024-07-16 01:32:42.024613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.347 qpair failed and we were unable to recover it. 
00:27:16.347 [2024-07-16 01:32:42.024698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.347 [2024-07-16 01:32:42.024710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.347 qpair failed and we were unable to recover it. 00:27:16.347 [2024-07-16 01:32:42.024840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.347 [2024-07-16 01:32:42.024851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.347 qpair failed and we were unable to recover it. 00:27:16.347 [2024-07-16 01:32:42.024916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.347 [2024-07-16 01:32:42.024925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.347 qpair failed and we were unable to recover it. 00:27:16.347 [2024-07-16 01:32:42.025067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.347 [2024-07-16 01:32:42.025078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.347 qpair failed and we were unable to recover it. 00:27:16.347 [2024-07-16 01:32:42.025151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.347 [2024-07-16 01:32:42.025162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.347 qpair failed and we were unable to recover it. 00:27:16.347 [2024-07-16 01:32:42.025248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.347 [2024-07-16 01:32:42.025259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.347 qpair failed and we were unable to recover it. 00:27:16.347 [2024-07-16 01:32:42.025389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.347 [2024-07-16 01:32:42.025401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.347 qpair failed and we were unable to recover it. 00:27:16.347 [2024-07-16 01:32:42.025543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.347 [2024-07-16 01:32:42.025554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.347 qpair failed and we were unable to recover it. 00:27:16.347 [2024-07-16 01:32:42.025693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.347 [2024-07-16 01:32:42.025704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.347 qpair failed and we were unable to recover it. 00:27:16.347 [2024-07-16 01:32:42.025855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.347 [2024-07-16 01:32:42.025866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.347 qpair failed and we were unable to recover it. 
00:27:16.347 [2024-07-16 01:32:42.025941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.347 [2024-07-16 01:32:42.025951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.347 qpair failed and we were unable to recover it. 00:27:16.347 [2024-07-16 01:32:42.026033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.347 [2024-07-16 01:32:42.026044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.347 qpair failed and we were unable to recover it. 00:27:16.347 [2024-07-16 01:32:42.026196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.347 [2024-07-16 01:32:42.026206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.347 qpair failed and we were unable to recover it. 00:27:16.347 [2024-07-16 01:32:42.026280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.347 [2024-07-16 01:32:42.026291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.347 qpair failed and we were unable to recover it. 00:27:16.347 [2024-07-16 01:32:42.026382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.347 [2024-07-16 01:32:42.026394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.347 qpair failed and we were unable to recover it. 00:27:16.347 [2024-07-16 01:32:42.026470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.347 [2024-07-16 01:32:42.026481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.347 qpair failed and we were unable to recover it. 00:27:16.347 [2024-07-16 01:32:42.026553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.347 [2024-07-16 01:32:42.026564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.347 qpair failed and we were unable to recover it. 00:27:16.347 [2024-07-16 01:32:42.026637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.347 [2024-07-16 01:32:42.026647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.347 qpair failed and we were unable to recover it. 00:27:16.347 [2024-07-16 01:32:42.026784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.347 [2024-07-16 01:32:42.026795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.347 qpair failed and we were unable to recover it. 00:27:16.347 [2024-07-16 01:32:42.026867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.347 [2024-07-16 01:32:42.026879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.347 qpair failed and we were unable to recover it. 
00:27:16.347 [2024-07-16 01:32:42.026955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.347 [2024-07-16 01:32:42.026966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.347 qpair failed and we were unable to recover it. 00:27:16.347 [2024-07-16 01:32:42.027045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.347 [2024-07-16 01:32:42.027055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.347 qpair failed and we were unable to recover it. 00:27:16.347 [2024-07-16 01:32:42.027116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.347 [2024-07-16 01:32:42.027127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.347 qpair failed and we were unable to recover it. 00:27:16.347 [2024-07-16 01:32:42.027202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.347 [2024-07-16 01:32:42.027213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.347 qpair failed and we were unable to recover it. 00:27:16.347 [2024-07-16 01:32:42.027436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.347 [2024-07-16 01:32:42.027448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.347 qpair failed and we were unable to recover it. 00:27:16.347 [2024-07-16 01:32:42.027531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.347 [2024-07-16 01:32:42.027542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.347 qpair failed and we were unable to recover it. 00:27:16.347 [2024-07-16 01:32:42.027743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.347 [2024-07-16 01:32:42.027753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.347 qpair failed and we were unable to recover it. 00:27:16.347 [2024-07-16 01:32:42.027817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.347 [2024-07-16 01:32:42.027827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.347 qpair failed and we were unable to recover it. 00:27:16.347 [2024-07-16 01:32:42.027964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.347 [2024-07-16 01:32:42.027974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.347 qpair failed and we were unable to recover it. 00:27:16.347 [2024-07-16 01:32:42.028057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.347 [2024-07-16 01:32:42.028068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.347 qpair failed and we were unable to recover it. 
00:27:16.347 [2024-07-16 01:32:42.028208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.347 [2024-07-16 01:32:42.028219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.347 qpair failed and we were unable to recover it. 00:27:16.347 [2024-07-16 01:32:42.028300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.347 [2024-07-16 01:32:42.028312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.347 qpair failed and we were unable to recover it. 00:27:16.347 [2024-07-16 01:32:42.028461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.347 [2024-07-16 01:32:42.028472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.347 qpair failed and we were unable to recover it. 00:27:16.347 [2024-07-16 01:32:42.028565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.347 [2024-07-16 01:32:42.028576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.347 qpair failed and we were unable to recover it. 00:27:16.347 [2024-07-16 01:32:42.028787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.347 [2024-07-16 01:32:42.028798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.347 qpair failed and we were unable to recover it. 00:27:16.347 [2024-07-16 01:32:42.028933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.348 [2024-07-16 01:32:42.028943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.348 qpair failed and we were unable to recover it. 00:27:16.348 [2024-07-16 01:32:42.029022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.348 [2024-07-16 01:32:42.029033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.348 qpair failed and we were unable to recover it. 00:27:16.348 [2024-07-16 01:32:42.029097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.348 [2024-07-16 01:32:42.029108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.348 qpair failed and we were unable to recover it. 00:27:16.348 [2024-07-16 01:32:42.029201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.348 [2024-07-16 01:32:42.029211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.348 qpair failed and we were unable to recover it. 00:27:16.348 [2024-07-16 01:32:42.029352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.348 [2024-07-16 01:32:42.029363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.348 qpair failed and we were unable to recover it. 
00:27:16.348 [2024-07-16 01:32:42.029516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.348 [2024-07-16 01:32:42.029527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.348 qpair failed and we were unable to recover it. 00:27:16.348 [2024-07-16 01:32:42.029670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.348 [2024-07-16 01:32:42.029681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.348 qpair failed and we were unable to recover it. 00:27:16.348 [2024-07-16 01:32:42.029832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.348 [2024-07-16 01:32:42.029842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.348 qpair failed and we were unable to recover it. 00:27:16.348 [2024-07-16 01:32:42.029936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.348 [2024-07-16 01:32:42.029946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.348 qpair failed and we were unable to recover it. 00:27:16.348 [2024-07-16 01:32:42.030079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.348 [2024-07-16 01:32:42.030090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.348 qpair failed and we were unable to recover it. 00:27:16.348 [2024-07-16 01:32:42.030231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.348 [2024-07-16 01:32:42.030242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.348 qpair failed and we were unable to recover it. 00:27:16.348 [2024-07-16 01:32:42.030374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.348 [2024-07-16 01:32:42.030386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.348 qpair failed and we were unable to recover it. 00:27:16.348 [2024-07-16 01:32:42.030453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.348 [2024-07-16 01:32:42.030462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.348 qpair failed and we were unable to recover it. 00:27:16.348 [2024-07-16 01:32:42.030547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.348 [2024-07-16 01:32:42.030558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.348 qpair failed and we were unable to recover it. 00:27:16.348 [2024-07-16 01:32:42.030692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.348 [2024-07-16 01:32:42.030722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.348 qpair failed and we were unable to recover it. 
00:27:16.348 [2024-07-16 01:32:42.030839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.348 [2024-07-16 01:32:42.030869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.348 qpair failed and we were unable to recover it.
00:27:16.350 [2024-07-16 01:32:42.043369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d3b040 (9): Bad file descriptor
00:27:16.350 [2024-07-16 01:32:42.043539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.350 [2024-07-16 01:32:42.043581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420
00:27:16.350 qpair failed and we were unable to recover it.
00:27:16.351 [2024-07-16 01:32:42.046742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.351 [2024-07-16 01:32:42.046779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420
00:27:16.351 qpair failed and we were unable to recover it.
00:27:16.353 [2024-07-16 01:32:42.058973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.353 [2024-07-16 01:32:42.058983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.353 qpair failed and we were unable to recover it. 00:27:16.353 [2024-07-16 01:32:42.059117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.353 [2024-07-16 01:32:42.059128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.353 qpair failed and we were unable to recover it. 00:27:16.353 [2024-07-16 01:32:42.059198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.353 [2024-07-16 01:32:42.059209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.354 qpair failed and we were unable to recover it. 00:27:16.354 [2024-07-16 01:32:42.059407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.354 [2024-07-16 01:32:42.059418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.354 qpair failed and we were unable to recover it. 00:27:16.354 [2024-07-16 01:32:42.059514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.354 [2024-07-16 01:32:42.059525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.354 qpair failed and we were unable to recover it. 00:27:16.354 [2024-07-16 01:32:42.059594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.354 [2024-07-16 01:32:42.059604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.354 qpair failed and we were unable to recover it. 00:27:16.354 [2024-07-16 01:32:42.059734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.354 [2024-07-16 01:32:42.059746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.354 qpair failed and we were unable to recover it. 00:27:16.354 [2024-07-16 01:32:42.059828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.354 [2024-07-16 01:32:42.059839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.354 qpair failed and we were unable to recover it. 00:27:16.354 [2024-07-16 01:32:42.059983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.354 [2024-07-16 01:32:42.059994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.354 qpair failed and we were unable to recover it. 00:27:16.354 [2024-07-16 01:32:42.060078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.354 [2024-07-16 01:32:42.060090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.354 qpair failed and we were unable to recover it. 
00:27:16.354 [2024-07-16 01:32:42.060162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.354 [2024-07-16 01:32:42.060173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.354 qpair failed and we were unable to recover it. 00:27:16.354 [2024-07-16 01:32:42.060246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.354 [2024-07-16 01:32:42.060258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.354 qpair failed and we were unable to recover it. 00:27:16.354 [2024-07-16 01:32:42.060402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.354 [2024-07-16 01:32:42.060415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.354 qpair failed and we were unable to recover it. 00:27:16.354 [2024-07-16 01:32:42.060495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.354 [2024-07-16 01:32:42.060506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.354 qpair failed and we were unable to recover it. 00:27:16.354 [2024-07-16 01:32:42.060586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.354 [2024-07-16 01:32:42.060597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.354 qpair failed and we were unable to recover it. 00:27:16.354 [2024-07-16 01:32:42.060682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.354 [2024-07-16 01:32:42.060693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.354 qpair failed and we were unable to recover it. 00:27:16.354 [2024-07-16 01:32:42.060829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.354 [2024-07-16 01:32:42.060839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.354 qpair failed and we were unable to recover it. 00:27:16.354 [2024-07-16 01:32:42.060914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.354 [2024-07-16 01:32:42.060925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.354 qpair failed and we were unable to recover it. 00:27:16.354 [2024-07-16 01:32:42.061009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.354 [2024-07-16 01:32:42.061020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.354 qpair failed and we were unable to recover it. 00:27:16.354 [2024-07-16 01:32:42.061092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.354 [2024-07-16 01:32:42.061103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.354 qpair failed and we were unable to recover it. 
00:27:16.354 [2024-07-16 01:32:42.061180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.354 [2024-07-16 01:32:42.061190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.354 qpair failed and we were unable to recover it. 00:27:16.354 [2024-07-16 01:32:42.061255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.354 [2024-07-16 01:32:42.061265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.354 qpair failed and we were unable to recover it. 00:27:16.354 [2024-07-16 01:32:42.061332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.354 [2024-07-16 01:32:42.061347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.354 qpair failed and we were unable to recover it. 00:27:16.354 [2024-07-16 01:32:42.061418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.354 [2024-07-16 01:32:42.061429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.354 qpair failed and we were unable to recover it. 00:27:16.354 [2024-07-16 01:32:42.061582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.354 [2024-07-16 01:32:42.061593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.354 qpair failed and we were unable to recover it. 00:27:16.354 [2024-07-16 01:32:42.061672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.354 [2024-07-16 01:32:42.061684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.354 qpair failed and we were unable to recover it. 00:27:16.354 [2024-07-16 01:32:42.061756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.354 [2024-07-16 01:32:42.061767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.354 qpair failed and we were unable to recover it. 00:27:16.354 [2024-07-16 01:32:42.061848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.354 [2024-07-16 01:32:42.061859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.354 qpair failed and we were unable to recover it. 00:27:16.354 [2024-07-16 01:32:42.061926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.354 [2024-07-16 01:32:42.061936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.354 qpair failed and we were unable to recover it. 00:27:16.354 [2024-07-16 01:32:42.062070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.354 [2024-07-16 01:32:42.062081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.354 qpair failed and we were unable to recover it. 
00:27:16.354 [2024-07-16 01:32:42.062144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.354 [2024-07-16 01:32:42.062154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.354 qpair failed and we were unable to recover it. 00:27:16.354 [2024-07-16 01:32:42.062236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.354 [2024-07-16 01:32:42.062247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.354 qpair failed and we were unable to recover it. 00:27:16.354 [2024-07-16 01:32:42.062401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.354 [2024-07-16 01:32:42.062414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.354 qpair failed and we were unable to recover it. 00:27:16.354 [2024-07-16 01:32:42.062487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.354 [2024-07-16 01:32:42.062499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.354 qpair failed and we were unable to recover it. 00:27:16.354 [2024-07-16 01:32:42.062567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.354 [2024-07-16 01:32:42.062578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.354 qpair failed and we were unable to recover it. 00:27:16.354 [2024-07-16 01:32:42.062710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.354 [2024-07-16 01:32:42.062721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.354 qpair failed and we were unable to recover it. 00:27:16.354 [2024-07-16 01:32:42.062790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.354 [2024-07-16 01:32:42.062802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.354 qpair failed and we were unable to recover it. 00:27:16.354 [2024-07-16 01:32:42.062902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.354 [2024-07-16 01:32:42.062913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.354 qpair failed and we were unable to recover it. 00:27:16.354 [2024-07-16 01:32:42.063047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.354 [2024-07-16 01:32:42.063059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.354 qpair failed and we were unable to recover it. 00:27:16.354 [2024-07-16 01:32:42.063135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.354 [2024-07-16 01:32:42.063147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.354 qpair failed and we were unable to recover it. 
00:27:16.355 [2024-07-16 01:32:42.063222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.355 [2024-07-16 01:32:42.063233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.355 qpair failed and we were unable to recover it. 00:27:16.355 [2024-07-16 01:32:42.063313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.355 [2024-07-16 01:32:42.063324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.355 qpair failed and we were unable to recover it. 00:27:16.355 [2024-07-16 01:32:42.063535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.355 [2024-07-16 01:32:42.063547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.355 qpair failed and we were unable to recover it. 00:27:16.355 [2024-07-16 01:32:42.063615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.355 [2024-07-16 01:32:42.063627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.355 qpair failed and we were unable to recover it. 00:27:16.355 [2024-07-16 01:32:42.063764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.355 [2024-07-16 01:32:42.063776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.355 qpair failed and we were unable to recover it. 00:27:16.355 [2024-07-16 01:32:42.063930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.355 [2024-07-16 01:32:42.063942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.355 qpair failed and we were unable to recover it. 00:27:16.355 [2024-07-16 01:32:42.064082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.355 [2024-07-16 01:32:42.064094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.355 qpair failed and we were unable to recover it. 00:27:16.355 [2024-07-16 01:32:42.064169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.355 [2024-07-16 01:32:42.064180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.355 qpair failed and we were unable to recover it. 00:27:16.355 [2024-07-16 01:32:42.064254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.355 [2024-07-16 01:32:42.064266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.355 qpair failed and we were unable to recover it. 00:27:16.355 [2024-07-16 01:32:42.064422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.355 [2024-07-16 01:32:42.064436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.355 qpair failed and we were unable to recover it. 
00:27:16.355 [2024-07-16 01:32:42.064576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.355 [2024-07-16 01:32:42.064588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.355 qpair failed and we were unable to recover it. 00:27:16.355 [2024-07-16 01:32:42.064789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.355 [2024-07-16 01:32:42.064800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.355 qpair failed and we were unable to recover it. 00:27:16.355 [2024-07-16 01:32:42.064944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.355 [2024-07-16 01:32:42.064957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.355 qpair failed and we were unable to recover it. 00:27:16.355 [2024-07-16 01:32:42.065046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.355 [2024-07-16 01:32:42.065057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.355 qpair failed and we were unable to recover it. 00:27:16.355 [2024-07-16 01:32:42.065124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.355 [2024-07-16 01:32:42.065135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.355 qpair failed and we were unable to recover it. 00:27:16.355 [2024-07-16 01:32:42.065200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.355 [2024-07-16 01:32:42.065211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.355 qpair failed and we were unable to recover it. 00:27:16.355 [2024-07-16 01:32:42.065273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.355 [2024-07-16 01:32:42.065284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.355 qpair failed and we were unable to recover it. 00:27:16.355 [2024-07-16 01:32:42.065361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.355 [2024-07-16 01:32:42.065373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.355 qpair failed and we were unable to recover it. 00:27:16.355 [2024-07-16 01:32:42.065443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.355 [2024-07-16 01:32:42.065454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.355 qpair failed and we were unable to recover it. 00:27:16.355 [2024-07-16 01:32:42.065526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.355 [2024-07-16 01:32:42.065537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.355 qpair failed and we were unable to recover it. 
00:27:16.355 [2024-07-16 01:32:42.065670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.355 [2024-07-16 01:32:42.065681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.355 qpair failed and we were unable to recover it. 00:27:16.355 [2024-07-16 01:32:42.065830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.355 [2024-07-16 01:32:42.065841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.355 qpair failed and we were unable to recover it. 00:27:16.355 [2024-07-16 01:32:42.065973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.355 [2024-07-16 01:32:42.065984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.355 qpair failed and we were unable to recover it. 00:27:16.355 [2024-07-16 01:32:42.066121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.355 [2024-07-16 01:32:42.066132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.355 qpair failed and we were unable to recover it. 00:27:16.355 [2024-07-16 01:32:42.066211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.355 [2024-07-16 01:32:42.066222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.355 qpair failed and we were unable to recover it. 00:27:16.355 [2024-07-16 01:32:42.066367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.355 [2024-07-16 01:32:42.066379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.355 qpair failed and we were unable to recover it. 00:27:16.355 [2024-07-16 01:32:42.066466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.355 [2024-07-16 01:32:42.066478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.355 qpair failed and we were unable to recover it. 00:27:16.355 [2024-07-16 01:32:42.066562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.355 [2024-07-16 01:32:42.066574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.355 qpair failed and we were unable to recover it. 00:27:16.355 [2024-07-16 01:32:42.066722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.355 [2024-07-16 01:32:42.066734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.355 qpair failed and we were unable to recover it. 00:27:16.355 [2024-07-16 01:32:42.066811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.355 [2024-07-16 01:32:42.066821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.355 qpair failed and we were unable to recover it. 
00:27:16.355 [2024-07-16 01:32:42.066903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.355 [2024-07-16 01:32:42.066914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.355 qpair failed and we were unable to recover it. 00:27:16.355 [2024-07-16 01:32:42.066981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.355 [2024-07-16 01:32:42.066992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.355 qpair failed and we were unable to recover it. 00:27:16.355 [2024-07-16 01:32:42.067084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.355 [2024-07-16 01:32:42.067096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.355 qpair failed and we were unable to recover it. 00:27:16.355 [2024-07-16 01:32:42.067200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.355 [2024-07-16 01:32:42.067211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.355 qpair failed and we were unable to recover it. 00:27:16.355 [2024-07-16 01:32:42.067344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.355 [2024-07-16 01:32:42.067355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.355 qpair failed and we were unable to recover it. 00:27:16.355 [2024-07-16 01:32:42.067433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.355 [2024-07-16 01:32:42.067445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.355 qpair failed and we were unable to recover it. 00:27:16.355 [2024-07-16 01:32:42.067522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.355 [2024-07-16 01:32:42.067532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.355 qpair failed and we were unable to recover it. 00:27:16.355 [2024-07-16 01:32:42.067602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.355 [2024-07-16 01:32:42.067613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.356 qpair failed and we were unable to recover it. 00:27:16.356 [2024-07-16 01:32:42.067677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.356 [2024-07-16 01:32:42.067688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.356 qpair failed and we were unable to recover it. 00:27:16.356 [2024-07-16 01:32:42.067857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.356 [2024-07-16 01:32:42.067869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.356 qpair failed and we were unable to recover it. 
00:27:16.356 [2024-07-16 01:32:42.067933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.356 [2024-07-16 01:32:42.067945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.356 qpair failed and we were unable to recover it. 00:27:16.356 [2024-07-16 01:32:42.068118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.356 [2024-07-16 01:32:42.068130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.356 qpair failed and we were unable to recover it. 00:27:16.356 [2024-07-16 01:32:42.068203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.356 [2024-07-16 01:32:42.068214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.356 qpair failed and we were unable to recover it. 00:27:16.356 [2024-07-16 01:32:42.068291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.356 [2024-07-16 01:32:42.068302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.356 qpair failed and we were unable to recover it. 00:27:16.356 [2024-07-16 01:32:42.068453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.356 [2024-07-16 01:32:42.068466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.356 qpair failed and we were unable to recover it. 00:27:16.356 [2024-07-16 01:32:42.068543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.356 [2024-07-16 01:32:42.068554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.356 qpair failed and we were unable to recover it. 00:27:16.356 [2024-07-16 01:32:42.068676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.356 [2024-07-16 01:32:42.068688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.356 qpair failed and we were unable to recover it. 00:27:16.356 [2024-07-16 01:32:42.068774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.356 [2024-07-16 01:32:42.068786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.356 qpair failed and we were unable to recover it. 00:27:16.356 [2024-07-16 01:32:42.068848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.356 [2024-07-16 01:32:42.068858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.356 qpair failed and we were unable to recover it. 00:27:16.356 [2024-07-16 01:32:42.068994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.356 [2024-07-16 01:32:42.069005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.356 qpair failed and we were unable to recover it. 
00:27:16.356 [2024-07-16 01:32:42.069071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.356 [2024-07-16 01:32:42.069083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.356 qpair failed and we were unable to recover it. 00:27:16.356 [2024-07-16 01:32:42.069166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.356 [2024-07-16 01:32:42.069177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.356 qpair failed and we were unable to recover it. 00:27:16.356 [2024-07-16 01:32:42.069246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.356 [2024-07-16 01:32:42.069260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.356 qpair failed and we were unable to recover it. 00:27:16.356 [2024-07-16 01:32:42.069396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.356 [2024-07-16 01:32:42.069407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.356 qpair failed and we were unable to recover it. 00:27:16.356 [2024-07-16 01:32:42.069550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.356 [2024-07-16 01:32:42.069561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.356 qpair failed and we were unable to recover it. 00:27:16.356 [2024-07-16 01:32:42.069709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.356 [2024-07-16 01:32:42.069721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.356 qpair failed and we were unable to recover it. 00:27:16.356 [2024-07-16 01:32:42.069794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.356 [2024-07-16 01:32:42.069805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.356 qpair failed and we were unable to recover it. 00:27:16.356 [2024-07-16 01:32:42.069876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.356 [2024-07-16 01:32:42.069888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.356 qpair failed and we were unable to recover it. 00:27:16.356 [2024-07-16 01:32:42.070018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.356 [2024-07-16 01:32:42.070029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.356 qpair failed and we were unable to recover it. 00:27:16.356 [2024-07-16 01:32:42.070120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.356 [2024-07-16 01:32:42.070132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.356 qpair failed and we were unable to recover it. 
00:27:16.356 [2024-07-16 01:32:42.070267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.356 [2024-07-16 01:32:42.070278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.356 qpair failed and we were unable to recover it. 00:27:16.356 [2024-07-16 01:32:42.070421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.356 [2024-07-16 01:32:42.070432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.356 qpair failed and we were unable to recover it. 00:27:16.356 [2024-07-16 01:32:42.070502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.356 [2024-07-16 01:32:42.070513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.356 qpair failed and we were unable to recover it. 00:27:16.356 [2024-07-16 01:32:42.070586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.356 [2024-07-16 01:32:42.070597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.356 qpair failed and we were unable to recover it. 00:27:16.356 [2024-07-16 01:32:42.070733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.356 [2024-07-16 01:32:42.070744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.356 qpair failed and we were unable to recover it. 00:27:16.356 [2024-07-16 01:32:42.070885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.356 [2024-07-16 01:32:42.070896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.356 qpair failed and we were unable to recover it. 00:27:16.356 [2024-07-16 01:32:42.071034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.356 [2024-07-16 01:32:42.071045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.356 qpair failed and we were unable to recover it. 00:27:16.356 [2024-07-16 01:32:42.071125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.356 [2024-07-16 01:32:42.071136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.356 qpair failed and we were unable to recover it. 00:27:16.356 [2024-07-16 01:32:42.071221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.356 [2024-07-16 01:32:42.071232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.356 qpair failed and we were unable to recover it. 00:27:16.356 [2024-07-16 01:32:42.071313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.356 [2024-07-16 01:32:42.071324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.356 qpair failed and we were unable to recover it. 
00:27:16.356 [2024-07-16 01:32:42.071401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.356 [2024-07-16 01:32:42.071413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.356 qpair failed and we were unable to recover it. 00:27:16.356 [2024-07-16 01:32:42.071499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.356 [2024-07-16 01:32:42.071510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.356 qpair failed and we were unable to recover it. 00:27:16.356 [2024-07-16 01:32:42.071659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.356 [2024-07-16 01:32:42.071670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.356 qpair failed and we were unable to recover it. 00:27:16.356 [2024-07-16 01:32:42.071756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.357 [2024-07-16 01:32:42.071768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.357 qpair failed and we were unable to recover it. 00:27:16.357 [2024-07-16 01:32:42.071838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.357 [2024-07-16 01:32:42.071849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.357 qpair failed and we were unable to recover it. 00:27:16.357 [2024-07-16 01:32:42.071929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.357 [2024-07-16 01:32:42.071941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.357 qpair failed and we were unable to recover it. 00:27:16.357 [2024-07-16 01:32:42.072044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.357 [2024-07-16 01:32:42.072054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.357 qpair failed and we were unable to recover it. 00:27:16.357 [2024-07-16 01:32:42.072126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.357 [2024-07-16 01:32:42.072137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.357 qpair failed and we were unable to recover it. 00:27:16.357 [2024-07-16 01:32:42.072216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.357 [2024-07-16 01:32:42.072227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.357 qpair failed and we were unable to recover it. 00:27:16.357 [2024-07-16 01:32:42.072374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.357 [2024-07-16 01:32:42.072387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.357 qpair failed and we were unable to recover it. 
00:27:16.357 [2024-07-16 01:32:42.072471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.357 [2024-07-16 01:32:42.072483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.357 qpair failed and we were unable to recover it. 00:27:16.357 [2024-07-16 01:32:42.072573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.357 [2024-07-16 01:32:42.072585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.357 qpair failed and we were unable to recover it. 00:27:16.357 [2024-07-16 01:32:42.072737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.357 [2024-07-16 01:32:42.072749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.357 qpair failed and we were unable to recover it. 00:27:16.357 [2024-07-16 01:32:42.072828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.357 [2024-07-16 01:32:42.072840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.357 qpair failed and we were unable to recover it. 00:27:16.357 [2024-07-16 01:32:42.072919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.357 [2024-07-16 01:32:42.072930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.357 qpair failed and we were unable to recover it. 00:27:16.357 [2024-07-16 01:32:42.073014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.357 [2024-07-16 01:32:42.073026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.357 qpair failed and we were unable to recover it. 00:27:16.357 [2024-07-16 01:32:42.073160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.357 [2024-07-16 01:32:42.073171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.357 qpair failed and we were unable to recover it. 00:27:16.357 [2024-07-16 01:32:42.073242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.357 [2024-07-16 01:32:42.073254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.357 qpair failed and we were unable to recover it. 00:27:16.357 [2024-07-16 01:32:42.073356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.357 [2024-07-16 01:32:42.073368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.357 qpair failed and we were unable to recover it. 00:27:16.357 [2024-07-16 01:32:42.073447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.357 [2024-07-16 01:32:42.073458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.357 qpair failed and we were unable to recover it. 
00:27:16.357 [2024-07-16 01:32:42.073534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.357 [2024-07-16 01:32:42.073546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.357 qpair failed and we were unable to recover it.
00:27:16.359 [2024-07-16 01:32:42.088056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.359 [2024-07-16 01:32:42.088124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420
00:27:16.359 qpair failed and we were unable to recover it.
00:27:16.362 [2024-07-16 01:32:42.107234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.362 [2024-07-16 01:32:42.107274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420
00:27:16.362 qpair failed and we were unable to recover it.
[the three-line connect()-failed / sock-connection-error / qpair-failed sequence above repeats continuously through 01:32:42.113630 for tqpairs 0x7ff0bc000b90, 0x7ff0b4000b90, and 0x7ff0c4000b90 (addr=10.0.0.2, port=4420, errno = 111); every connection attempt failed and no qpair could be recovered]
00:27:16.363 [2024-07-16 01:32:42.113827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.363 [2024-07-16 01:32:42.113859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.363 qpair failed and we were unable to recover it. 00:27:16.363 [2024-07-16 01:32:42.114061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.363 [2024-07-16 01:32:42.114092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.363 qpair failed and we were unable to recover it. 00:27:16.363 [2024-07-16 01:32:42.114274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.363 [2024-07-16 01:32:42.114291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.363 qpair failed and we were unable to recover it. 00:27:16.363 [2024-07-16 01:32:42.114455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.363 [2024-07-16 01:32:42.114488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.363 qpair failed and we were unable to recover it. 00:27:16.363 [2024-07-16 01:32:42.114687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.363 [2024-07-16 01:32:42.114718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.363 qpair failed and we were unable to recover it. 00:27:16.363 [2024-07-16 01:32:42.114849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.363 [2024-07-16 01:32:42.114881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.363 qpair failed and we were unable to recover it. 00:27:16.363 [2024-07-16 01:32:42.115124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.363 [2024-07-16 01:32:42.115156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.363 qpair failed and we were unable to recover it. 00:27:16.363 [2024-07-16 01:32:42.115358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.363 [2024-07-16 01:32:42.115390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.363 qpair failed and we were unable to recover it. 00:27:16.363 [2024-07-16 01:32:42.115513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.363 [2024-07-16 01:32:42.115545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.363 qpair failed and we were unable to recover it. 00:27:16.363 [2024-07-16 01:32:42.115753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.363 [2024-07-16 01:32:42.115784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.363 qpair failed and we were unable to recover it. 
00:27:16.363 [2024-07-16 01:32:42.115898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.363 [2024-07-16 01:32:42.115929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.363 qpair failed and we were unable to recover it. 00:27:16.363 [2024-07-16 01:32:42.116039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.363 [2024-07-16 01:32:42.116055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.363 qpair failed and we were unable to recover it. 00:27:16.363 [2024-07-16 01:32:42.116200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.363 [2024-07-16 01:32:42.116216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.363 qpair failed and we were unable to recover it. 00:27:16.363 [2024-07-16 01:32:42.116375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.363 [2024-07-16 01:32:42.116392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.363 qpair failed and we were unable to recover it. 00:27:16.363 [2024-07-16 01:32:42.116500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.363 [2024-07-16 01:32:42.116516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.363 qpair failed and we were unable to recover it. 00:27:16.363 [2024-07-16 01:32:42.116658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.363 [2024-07-16 01:32:42.116675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.363 qpair failed and we were unable to recover it. 00:27:16.363 [2024-07-16 01:32:42.116835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.363 [2024-07-16 01:32:42.116851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.363 qpair failed and we were unable to recover it. 00:27:16.363 [2024-07-16 01:32:42.117008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.363 [2024-07-16 01:32:42.117039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.363 qpair failed and we were unable to recover it. 00:27:16.363 [2024-07-16 01:32:42.117229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.363 [2024-07-16 01:32:42.117260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.363 qpair failed and we were unable to recover it. 00:27:16.363 [2024-07-16 01:32:42.117392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.363 [2024-07-16 01:32:42.117430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.363 qpair failed and we were unable to recover it. 
00:27:16.363 [2024-07-16 01:32:42.117556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.363 [2024-07-16 01:32:42.117588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.363 qpair failed and we were unable to recover it. 00:27:16.363 [2024-07-16 01:32:42.117838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.363 [2024-07-16 01:32:42.117870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.363 qpair failed and we were unable to recover it. 00:27:16.363 [2024-07-16 01:32:42.117986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.363 [2024-07-16 01:32:42.118017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.363 qpair failed and we were unable to recover it. 00:27:16.363 [2024-07-16 01:32:42.118142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.363 [2024-07-16 01:32:42.118180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.363 qpair failed and we were unable to recover it. 00:27:16.363 [2024-07-16 01:32:42.118248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.363 [2024-07-16 01:32:42.118258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.363 qpair failed and we were unable to recover it. 00:27:16.363 [2024-07-16 01:32:42.118351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.363 [2024-07-16 01:32:42.118361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.363 qpair failed and we were unable to recover it. 00:27:16.363 [2024-07-16 01:32:42.118519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.363 [2024-07-16 01:32:42.118551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.363 qpair failed and we were unable to recover it. 00:27:16.363 [2024-07-16 01:32:42.118756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.363 [2024-07-16 01:32:42.118787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.363 qpair failed and we were unable to recover it. 00:27:16.363 [2024-07-16 01:32:42.118899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.363 [2024-07-16 01:32:42.118940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.363 qpair failed and we were unable to recover it. 00:27:16.363 [2024-07-16 01:32:42.119082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.363 [2024-07-16 01:32:42.119134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.363 qpair failed and we were unable to recover it. 
00:27:16.363 [2024-07-16 01:32:42.119302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.363 [2024-07-16 01:32:42.119321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.364 qpair failed and we were unable to recover it. 00:27:16.364 [2024-07-16 01:32:42.119432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.364 [2024-07-16 01:32:42.119449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.364 qpair failed and we were unable to recover it. 00:27:16.364 [2024-07-16 01:32:42.119529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.364 [2024-07-16 01:32:42.119547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.364 qpair failed and we were unable to recover it. 00:27:16.364 [2024-07-16 01:32:42.119690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.364 [2024-07-16 01:32:42.119704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.364 qpair failed and we were unable to recover it. 00:27:16.364 [2024-07-16 01:32:42.119774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.364 [2024-07-16 01:32:42.119787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.364 qpair failed and we were unable to recover it. 00:27:16.364 [2024-07-16 01:32:42.119844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.364 [2024-07-16 01:32:42.119858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.364 qpair failed and we were unable to recover it. 00:27:16.364 [2024-07-16 01:32:42.119931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.364 [2024-07-16 01:32:42.119942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.364 qpair failed and we were unable to recover it. 00:27:16.364 [2024-07-16 01:32:42.120092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.364 [2024-07-16 01:32:42.120102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.364 qpair failed and we were unable to recover it. 00:27:16.364 [2024-07-16 01:32:42.120175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.364 [2024-07-16 01:32:42.120185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.364 qpair failed and we were unable to recover it. 00:27:16.364 [2024-07-16 01:32:42.120350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.364 [2024-07-16 01:32:42.120364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.364 qpair failed and we were unable to recover it. 
00:27:16.364 [2024-07-16 01:32:42.120513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.364 [2024-07-16 01:32:42.120525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.364 qpair failed and we were unable to recover it. 00:27:16.364 [2024-07-16 01:32:42.120697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.364 [2024-07-16 01:32:42.120708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.364 qpair failed and we were unable to recover it. 00:27:16.364 [2024-07-16 01:32:42.120859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.364 [2024-07-16 01:32:42.120871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.364 qpair failed and we were unable to recover it. 00:27:16.364 [2024-07-16 01:32:42.121001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.364 [2024-07-16 01:32:42.121013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.364 qpair failed and we were unable to recover it. 00:27:16.364 [2024-07-16 01:32:42.121149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.364 [2024-07-16 01:32:42.121161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.364 qpair failed and we were unable to recover it. 00:27:16.364 [2024-07-16 01:32:42.121293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.364 [2024-07-16 01:32:42.121304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.364 qpair failed and we were unable to recover it. 00:27:16.364 [2024-07-16 01:32:42.121464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.364 [2024-07-16 01:32:42.121475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.364 qpair failed and we were unable to recover it. 00:27:16.364 [2024-07-16 01:32:42.121627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.364 [2024-07-16 01:32:42.121638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.364 qpair failed and we were unable to recover it. 00:27:16.364 [2024-07-16 01:32:42.121794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.364 [2024-07-16 01:32:42.121805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.364 qpair failed and we were unable to recover it. 00:27:16.364 [2024-07-16 01:32:42.121881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.364 [2024-07-16 01:32:42.121891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.364 qpair failed and we were unable to recover it. 
00:27:16.364 [2024-07-16 01:32:42.121973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.364 [2024-07-16 01:32:42.121983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.364 qpair failed and we were unable to recover it. 00:27:16.364 [2024-07-16 01:32:42.122047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.364 [2024-07-16 01:32:42.122058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.364 qpair failed and we were unable to recover it. 00:27:16.364 [2024-07-16 01:32:42.122128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.364 [2024-07-16 01:32:42.122138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.364 qpair failed and we were unable to recover it. 00:27:16.364 [2024-07-16 01:32:42.122202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.364 [2024-07-16 01:32:42.122212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.364 qpair failed and we were unable to recover it. 00:27:16.364 [2024-07-16 01:32:42.122271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.364 [2024-07-16 01:32:42.122281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.364 qpair failed and we were unable to recover it. 00:27:16.364 [2024-07-16 01:32:42.122364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.364 [2024-07-16 01:32:42.122375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.364 qpair failed and we were unable to recover it. 00:27:16.364 [2024-07-16 01:32:42.122445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.364 [2024-07-16 01:32:42.122456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.364 qpair failed and we were unable to recover it. 00:27:16.364 [2024-07-16 01:32:42.122533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.364 [2024-07-16 01:32:42.122543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.364 qpair failed and we were unable to recover it. 00:27:16.364 [2024-07-16 01:32:42.122679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.364 [2024-07-16 01:32:42.122689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.364 qpair failed and we were unable to recover it. 00:27:16.364 [2024-07-16 01:32:42.122818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.364 [2024-07-16 01:32:42.122828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.364 qpair failed and we were unable to recover it. 
00:27:16.364 [2024-07-16 01:32:42.122905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.364 [2024-07-16 01:32:42.122915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.364 qpair failed and we were unable to recover it. 00:27:16.364 [2024-07-16 01:32:42.123061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.364 [2024-07-16 01:32:42.123071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.364 qpair failed and we were unable to recover it. 00:27:16.364 [2024-07-16 01:32:42.123145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.364 [2024-07-16 01:32:42.123156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.364 qpair failed and we were unable to recover it. 00:27:16.364 [2024-07-16 01:32:42.123287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.364 [2024-07-16 01:32:42.123298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.364 qpair failed and we were unable to recover it. 00:27:16.364 [2024-07-16 01:32:42.123376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.364 [2024-07-16 01:32:42.123386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.364 qpair failed and we were unable to recover it. 00:27:16.364 [2024-07-16 01:32:42.123529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.364 [2024-07-16 01:32:42.123539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.364 qpair failed and we were unable to recover it. 00:27:16.364 [2024-07-16 01:32:42.123622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.364 [2024-07-16 01:32:42.123633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.364 qpair failed and we were unable to recover it. 00:27:16.364 [2024-07-16 01:32:42.123785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.364 [2024-07-16 01:32:42.123796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.364 qpair failed and we were unable to recover it. 00:27:16.364 [2024-07-16 01:32:42.123935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.364 [2024-07-16 01:32:42.123946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.364 qpair failed and we were unable to recover it. 00:27:16.364 [2024-07-16 01:32:42.124078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.364 [2024-07-16 01:32:42.124088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.364 qpair failed and we were unable to recover it. 
00:27:16.364 [2024-07-16 01:32:42.124184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.364 [2024-07-16 01:32:42.124194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.364 qpair failed and we were unable to recover it. 00:27:16.364 [2024-07-16 01:32:42.124334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.365 [2024-07-16 01:32:42.124350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.365 qpair failed and we were unable to recover it. 00:27:16.365 [2024-07-16 01:32:42.124428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.365 [2024-07-16 01:32:42.124438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.365 qpair failed and we were unable to recover it. 00:27:16.365 [2024-07-16 01:32:42.124577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.365 [2024-07-16 01:32:42.124588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.365 qpair failed and we were unable to recover it. 00:27:16.365 [2024-07-16 01:32:42.124662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.365 [2024-07-16 01:32:42.124672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.365 qpair failed and we were unable to recover it. 00:27:16.365 [2024-07-16 01:32:42.124809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.365 [2024-07-16 01:32:42.124820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.365 qpair failed and we were unable to recover it. 00:27:16.365 [2024-07-16 01:32:42.124957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.365 [2024-07-16 01:32:42.124968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.365 qpair failed and we were unable to recover it. 00:27:16.365 [2024-07-16 01:32:42.125117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.365 [2024-07-16 01:32:42.125128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.365 qpair failed and we were unable to recover it. 00:27:16.365 [2024-07-16 01:32:42.125289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.365 [2024-07-16 01:32:42.125301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.365 qpair failed and we were unable to recover it. 00:27:16.365 [2024-07-16 01:32:42.125381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.365 [2024-07-16 01:32:42.125392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.365 qpair failed and we were unable to recover it. 
00:27:16.365 [2024-07-16 01:32:42.125457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.365 [2024-07-16 01:32:42.125468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.365 qpair failed and we were unable to recover it. 00:27:16.365 [2024-07-16 01:32:42.125559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.365 [2024-07-16 01:32:42.125570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.365 qpair failed and we were unable to recover it. 00:27:16.365 [2024-07-16 01:32:42.125660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.365 [2024-07-16 01:32:42.125670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.365 qpair failed and we were unable to recover it. 00:27:16.365 [2024-07-16 01:32:42.125749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.365 [2024-07-16 01:32:42.125759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.365 qpair failed and we were unable to recover it. 00:27:16.365 [2024-07-16 01:32:42.125925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.365 [2024-07-16 01:32:42.125936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.365 qpair failed and we were unable to recover it. 00:27:16.365 [2024-07-16 01:32:42.126010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.365 [2024-07-16 01:32:42.126020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.365 qpair failed and we were unable to recover it. 00:27:16.365 [2024-07-16 01:32:42.126166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.365 [2024-07-16 01:32:42.126177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.365 qpair failed and we were unable to recover it. 00:27:16.365 [2024-07-16 01:32:42.126254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.365 [2024-07-16 01:32:42.126264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.365 qpair failed and we were unable to recover it. 00:27:16.365 [2024-07-16 01:32:42.126346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.365 [2024-07-16 01:32:42.126357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.365 qpair failed and we were unable to recover it. 00:27:16.365 [2024-07-16 01:32:42.126423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.365 [2024-07-16 01:32:42.126433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.365 qpair failed and we were unable to recover it. 
00:27:16.365 [2024-07-16 01:32:42.126585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.365 [2024-07-16 01:32:42.126596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.365 qpair failed and we were unable to recover it. 00:27:16.365 [2024-07-16 01:32:42.126672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.365 [2024-07-16 01:32:42.126682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.365 qpair failed and we were unable to recover it. 00:27:16.365 [2024-07-16 01:32:42.126815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.365 [2024-07-16 01:32:42.126826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.365 qpair failed and we were unable to recover it. 00:27:16.365 [2024-07-16 01:32:42.126974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.365 [2024-07-16 01:32:42.126986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.365 qpair failed and we were unable to recover it. 00:27:16.365 [2024-07-16 01:32:42.127070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.365 [2024-07-16 01:32:42.127081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.365 qpair failed and we were unable to recover it. 00:27:16.365 [2024-07-16 01:32:42.127235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.365 [2024-07-16 01:32:42.127246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.365 qpair failed and we were unable to recover it. 00:27:16.365 [2024-07-16 01:32:42.127321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.365 [2024-07-16 01:32:42.127330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.365 qpair failed and we were unable to recover it. 00:27:16.365 [2024-07-16 01:32:42.127419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.365 [2024-07-16 01:32:42.127429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.365 qpair failed and we were unable to recover it. 00:27:16.365 [2024-07-16 01:32:42.127578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.365 [2024-07-16 01:32:42.127589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.365 qpair failed and we were unable to recover it. 00:27:16.365 [2024-07-16 01:32:42.127671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.365 [2024-07-16 01:32:42.127684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.365 qpair failed and we were unable to recover it. 
00:27:16.365 [2024-07-16 01:32:42.127752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.365 [2024-07-16 01:32:42.127762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.365 qpair failed and we were unable to recover it. 00:27:16.365 [2024-07-16 01:32:42.127838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.365 [2024-07-16 01:32:42.127849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.365 qpair failed and we were unable to recover it. 00:27:16.365 [2024-07-16 01:32:42.128000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.365 [2024-07-16 01:32:42.128011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.365 qpair failed and we were unable to recover it. 00:27:16.365 [2024-07-16 01:32:42.128179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.365 [2024-07-16 01:32:42.128190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.365 qpair failed and we were unable to recover it. 00:27:16.365 [2024-07-16 01:32:42.128272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.365 [2024-07-16 01:32:42.128283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.365 qpair failed and we were unable to recover it. 00:27:16.365 [2024-07-16 01:32:42.128363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.365 [2024-07-16 01:32:42.128374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.365 qpair failed and we were unable to recover it. 00:27:16.365 [2024-07-16 01:32:42.128443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.365 [2024-07-16 01:32:42.128453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.365 qpair failed and we were unable to recover it. 00:27:16.365 [2024-07-16 01:32:42.128589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.365 [2024-07-16 01:32:42.128599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.365 qpair failed and we were unable to recover it. 00:27:16.365 [2024-07-16 01:32:42.128677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.365 [2024-07-16 01:32:42.128688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.365 qpair failed and we were unable to recover it. 00:27:16.365 [2024-07-16 01:32:42.128822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.365 [2024-07-16 01:32:42.128833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.365 qpair failed and we were unable to recover it. 
00:27:16.365 [2024-07-16 01:32:42.128986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.365 [2024-07-16 01:32:42.128998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.365 qpair failed and we were unable to recover it. 00:27:16.365 [2024-07-16 01:32:42.129081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.365 [2024-07-16 01:32:42.129092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.365 qpair failed and we were unable to recover it. 00:27:16.365 [2024-07-16 01:32:42.129182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.366 [2024-07-16 01:32:42.129193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.366 qpair failed and we were unable to recover it. 00:27:16.366 [2024-07-16 01:32:42.129258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.366 [2024-07-16 01:32:42.129269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.366 qpair failed and we were unable to recover it. 00:27:16.366 [2024-07-16 01:32:42.129344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.366 [2024-07-16 01:32:42.129355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.366 qpair failed and we were unable to recover it. 00:27:16.366 [2024-07-16 01:32:42.129500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.366 [2024-07-16 01:32:42.129511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.366 qpair failed and we were unable to recover it. 00:27:16.366 [2024-07-16 01:32:42.129684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.366 [2024-07-16 01:32:42.129695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.366 qpair failed and we were unable to recover it. 00:27:16.366 [2024-07-16 01:32:42.129829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.366 [2024-07-16 01:32:42.129840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.366 qpair failed and we were unable to recover it. 00:27:16.366 [2024-07-16 01:32:42.129902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.366 [2024-07-16 01:32:42.129912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.366 qpair failed and we were unable to recover it. 00:27:16.366 [2024-07-16 01:32:42.130068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.366 [2024-07-16 01:32:42.130080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.366 qpair failed and we were unable to recover it. 
00:27:16.366 [2024-07-16 01:32:42.130210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.366 [2024-07-16 01:32:42.130220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.366 qpair failed and we were unable to recover it. 00:27:16.366 [2024-07-16 01:32:42.130300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.366 [2024-07-16 01:32:42.130311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.366 qpair failed and we were unable to recover it. 00:27:16.366 [2024-07-16 01:32:42.130399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.366 [2024-07-16 01:32:42.130410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.366 qpair failed and we were unable to recover it. 00:27:16.366 [2024-07-16 01:32:42.130485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.366 [2024-07-16 01:32:42.130495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.366 qpair failed and we were unable to recover it. 00:27:16.366 [2024-07-16 01:32:42.130639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.366 [2024-07-16 01:32:42.130650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.366 qpair failed and we were unable to recover it. 00:27:16.366 [2024-07-16 01:32:42.130731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.366 [2024-07-16 01:32:42.130743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.366 qpair failed and we were unable to recover it. 00:27:16.366 [2024-07-16 01:32:42.130814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.366 [2024-07-16 01:32:42.130825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.366 qpair failed and we were unable to recover it. 00:27:16.366 [2024-07-16 01:32:42.130904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.366 [2024-07-16 01:32:42.130916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.366 qpair failed and we were unable to recover it. 00:27:16.366 [2024-07-16 01:32:42.131063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.366 [2024-07-16 01:32:42.131075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.366 qpair failed and we were unable to recover it. 00:27:16.366 [2024-07-16 01:32:42.131157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.366 [2024-07-16 01:32:42.131167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.366 qpair failed and we were unable to recover it. 
00:27:16.366 [2024-07-16 01:32:42.131308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.366 [2024-07-16 01:32:42.131320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.366 qpair failed and we were unable to recover it.
00:27:16.366 [... the same three-line sequence — posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." — repeats, with only the timestamps changing, roughly 210 times in total between 01:32:42.131 and 01:32:42.158 ...]
00:27:16.372 [2024-07-16 01:32:42.158069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.372 [2024-07-16 01:32:42.158081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.372 qpair failed and we were unable to recover it. 00:27:16.372 [2024-07-16 01:32:42.158284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.372 [2024-07-16 01:32:42.158296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.372 qpair failed and we were unable to recover it. 00:27:16.372 [2024-07-16 01:32:42.158446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.372 [2024-07-16 01:32:42.158460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.372 qpair failed and we were unable to recover it. 00:27:16.372 [2024-07-16 01:32:42.158682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.372 [2024-07-16 01:32:42.158695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.372 qpair failed and we were unable to recover it. 00:27:16.372 [2024-07-16 01:32:42.158846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.372 [2024-07-16 01:32:42.158858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.372 qpair failed and we were unable to recover it. 00:27:16.372 [2024-07-16 01:32:42.159014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.372 [2024-07-16 01:32:42.159027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.372 qpair failed and we were unable to recover it. 00:27:16.372 [2024-07-16 01:32:42.159180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.372 [2024-07-16 01:32:42.159192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.372 qpair failed and we were unable to recover it. 00:27:16.372 [2024-07-16 01:32:42.159400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.372 [2024-07-16 01:32:42.159414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.372 qpair failed and we were unable to recover it. 00:27:16.372 [2024-07-16 01:32:42.159511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.372 [2024-07-16 01:32:42.159524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.372 qpair failed and we were unable to recover it. 00:27:16.372 [2024-07-16 01:32:42.159722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.372 [2024-07-16 01:32:42.159735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.372 qpair failed and we were unable to recover it. 
00:27:16.372 [2024-07-16 01:32:42.159889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.372 [2024-07-16 01:32:42.159902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.372 qpair failed and we were unable to recover it. 00:27:16.372 [2024-07-16 01:32:42.160105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.372 [2024-07-16 01:32:42.160118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.372 qpair failed and we were unable to recover it. 00:27:16.372 [2024-07-16 01:32:42.160263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.372 [2024-07-16 01:32:42.160276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.372 qpair failed and we were unable to recover it. 00:27:16.372 [2024-07-16 01:32:42.160371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.372 [2024-07-16 01:32:42.160384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.372 qpair failed and we were unable to recover it. 00:27:16.372 [2024-07-16 01:32:42.160530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.372 [2024-07-16 01:32:42.160543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.372 qpair failed and we were unable to recover it. 00:27:16.372 [2024-07-16 01:32:42.160681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.372 [2024-07-16 01:32:42.160694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.372 qpair failed and we were unable to recover it. 00:27:16.372 [2024-07-16 01:32:42.160921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.372 [2024-07-16 01:32:42.160934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.372 qpair failed and we were unable to recover it. 00:27:16.372 [2024-07-16 01:32:42.161016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.372 [2024-07-16 01:32:42.161029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.372 qpair failed and we were unable to recover it. 00:27:16.372 [2024-07-16 01:32:42.161104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.372 [2024-07-16 01:32:42.161117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.372 qpair failed and we were unable to recover it. 00:27:16.372 [2024-07-16 01:32:42.161325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.372 [2024-07-16 01:32:42.161343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.372 qpair failed and we were unable to recover it. 
00:27:16.372 [2024-07-16 01:32:42.161543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.372 [2024-07-16 01:32:42.161556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.372 qpair failed and we were unable to recover it. 00:27:16.372 [2024-07-16 01:32:42.161646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.372 [2024-07-16 01:32:42.161658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.372 qpair failed and we were unable to recover it. 00:27:16.372 [2024-07-16 01:32:42.161863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.372 [2024-07-16 01:32:42.161876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.372 qpair failed and we were unable to recover it. 00:27:16.372 [2024-07-16 01:32:42.162018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.372 [2024-07-16 01:32:42.162031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.372 qpair failed and we were unable to recover it. 00:27:16.372 [2024-07-16 01:32:42.162161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.372 [2024-07-16 01:32:42.162174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.372 qpair failed and we were unable to recover it. 00:27:16.372 [2024-07-16 01:32:42.162324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.372 [2024-07-16 01:32:42.162342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.372 qpair failed and we were unable to recover it. 00:27:16.372 [2024-07-16 01:32:42.162510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.372 [2024-07-16 01:32:42.162537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.372 qpair failed and we were unable to recover it. 00:27:16.372 [2024-07-16 01:32:42.162626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.372 [2024-07-16 01:32:42.162642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.372 qpair failed and we were unable to recover it. 00:27:16.372 [2024-07-16 01:32:42.162817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.372 [2024-07-16 01:32:42.162833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.372 qpair failed and we were unable to recover it. 00:27:16.372 [2024-07-16 01:32:42.162909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.372 [2024-07-16 01:32:42.162924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.372 qpair failed and we were unable to recover it. 
00:27:16.372 [2024-07-16 01:32:42.163093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.372 [2024-07-16 01:32:42.163109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.372 qpair failed and we were unable to recover it. 00:27:16.372 [2024-07-16 01:32:42.163290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.372 [2024-07-16 01:32:42.163306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.372 qpair failed and we were unable to recover it. 00:27:16.372 [2024-07-16 01:32:42.163450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.372 [2024-07-16 01:32:42.163465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.372 qpair failed and we were unable to recover it. 00:27:16.372 [2024-07-16 01:32:42.163627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.372 [2024-07-16 01:32:42.163639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.372 qpair failed and we were unable to recover it. 00:27:16.372 [2024-07-16 01:32:42.163790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.372 [2024-07-16 01:32:42.163802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.372 qpair failed and we were unable to recover it. 00:27:16.372 [2024-07-16 01:32:42.164004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.372 [2024-07-16 01:32:42.164016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.372 qpair failed and we were unable to recover it. 00:27:16.372 [2024-07-16 01:32:42.164119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.372 [2024-07-16 01:32:42.164131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.372 qpair failed and we were unable to recover it. 00:27:16.372 [2024-07-16 01:32:42.164212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.372 [2024-07-16 01:32:42.164224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.372 qpair failed and we were unable to recover it. 00:27:16.372 [2024-07-16 01:32:42.164306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.372 [2024-07-16 01:32:42.164318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.373 qpair failed and we were unable to recover it. 00:27:16.373 [2024-07-16 01:32:42.164419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.373 [2024-07-16 01:32:42.164435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.373 qpair failed and we were unable to recover it. 
00:27:16.373 [2024-07-16 01:32:42.164641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.373 [2024-07-16 01:32:42.164653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.373 qpair failed and we were unable to recover it. 00:27:16.373 [2024-07-16 01:32:42.164750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.373 [2024-07-16 01:32:42.164762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.373 qpair failed and we were unable to recover it. 00:27:16.373 [2024-07-16 01:32:42.164838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.373 [2024-07-16 01:32:42.164850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.373 qpair failed and we were unable to recover it. 00:27:16.373 [2024-07-16 01:32:42.165053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.373 [2024-07-16 01:32:42.165065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.373 qpair failed and we were unable to recover it. 00:27:16.373 [2024-07-16 01:32:42.165152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.373 [2024-07-16 01:32:42.165164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.373 qpair failed and we were unable to recover it. 00:27:16.373 [2024-07-16 01:32:42.165269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.373 [2024-07-16 01:32:42.165281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.373 qpair failed and we were unable to recover it. 00:27:16.373 [2024-07-16 01:32:42.165374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.373 [2024-07-16 01:32:42.165387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.373 qpair failed and we were unable to recover it. 00:27:16.373 [2024-07-16 01:32:42.165451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.373 [2024-07-16 01:32:42.165462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.373 qpair failed and we were unable to recover it. 00:27:16.373 [2024-07-16 01:32:42.165536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.373 [2024-07-16 01:32:42.165548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.373 qpair failed and we were unable to recover it. 00:27:16.373 [2024-07-16 01:32:42.165687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.373 [2024-07-16 01:32:42.165699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.373 qpair failed and we were unable to recover it. 
00:27:16.373 [2024-07-16 01:32:42.165846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.373 [2024-07-16 01:32:42.165858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.373 qpair failed and we were unable to recover it. 00:27:16.373 [2024-07-16 01:32:42.165955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.373 [2024-07-16 01:32:42.165967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.373 qpair failed and we were unable to recover it. 00:27:16.373 [2024-07-16 01:32:42.166120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.373 [2024-07-16 01:32:42.166132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.373 qpair failed and we were unable to recover it. 00:27:16.373 [2024-07-16 01:32:42.166216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.373 [2024-07-16 01:32:42.166227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.373 qpair failed and we were unable to recover it. 00:27:16.373 [2024-07-16 01:32:42.166430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.373 [2024-07-16 01:32:42.166442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.373 qpair failed and we were unable to recover it. 00:27:16.373 [2024-07-16 01:32:42.166531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.373 [2024-07-16 01:32:42.166543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.373 qpair failed and we were unable to recover it. 00:27:16.373 [2024-07-16 01:32:42.166627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.373 [2024-07-16 01:32:42.166639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.373 qpair failed and we were unable to recover it. 00:27:16.373 [2024-07-16 01:32:42.166793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.373 [2024-07-16 01:32:42.166804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.373 qpair failed and we were unable to recover it. 00:27:16.373 [2024-07-16 01:32:42.166952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.373 [2024-07-16 01:32:42.166965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.373 qpair failed and we were unable to recover it. 00:27:16.373 [2024-07-16 01:32:42.167109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.373 [2024-07-16 01:32:42.167122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.373 qpair failed and we were unable to recover it. 
00:27:16.373 [2024-07-16 01:32:42.167193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.373 [2024-07-16 01:32:42.167204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.373 qpair failed and we were unable to recover it. 00:27:16.373 [2024-07-16 01:32:42.167301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.373 [2024-07-16 01:32:42.167313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.373 qpair failed and we were unable to recover it. 00:27:16.373 [2024-07-16 01:32:42.167377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.373 [2024-07-16 01:32:42.167389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.373 qpair failed and we were unable to recover it. 00:27:16.373 [2024-07-16 01:32:42.167474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.373 [2024-07-16 01:32:42.167485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.373 qpair failed and we were unable to recover it. 00:27:16.373 [2024-07-16 01:32:42.167626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.373 [2024-07-16 01:32:42.167638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.373 qpair failed and we were unable to recover it. 00:27:16.373 [2024-07-16 01:32:42.167702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.373 [2024-07-16 01:32:42.167713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.373 qpair failed and we were unable to recover it. 00:27:16.373 [2024-07-16 01:32:42.167796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.373 [2024-07-16 01:32:42.167807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.373 qpair failed and we were unable to recover it. 00:27:16.373 [2024-07-16 01:32:42.167951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.373 [2024-07-16 01:32:42.167963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.373 qpair failed and we were unable to recover it. 00:27:16.373 [2024-07-16 01:32:42.168100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.373 [2024-07-16 01:32:42.168111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.373 qpair failed and we were unable to recover it. 00:27:16.373 [2024-07-16 01:32:42.168195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.373 [2024-07-16 01:32:42.168207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.373 qpair failed and we were unable to recover it. 
00:27:16.373 [2024-07-16 01:32:42.168355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.373 [2024-07-16 01:32:42.168370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.373 qpair failed and we were unable to recover it. 00:27:16.373 [2024-07-16 01:32:42.168448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.373 [2024-07-16 01:32:42.168460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.373 qpair failed and we were unable to recover it. 00:27:16.373 [2024-07-16 01:32:42.168546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.373 [2024-07-16 01:32:42.168559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.373 qpair failed and we were unable to recover it. 00:27:16.373 [2024-07-16 01:32:42.168641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.373 [2024-07-16 01:32:42.168653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.373 qpair failed and we were unable to recover it. 00:27:16.373 [2024-07-16 01:32:42.168733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.373 [2024-07-16 01:32:42.168746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.373 qpair failed and we were unable to recover it. 00:27:16.373 [2024-07-16 01:32:42.168883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.373 [2024-07-16 01:32:42.168895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.373 qpair failed and we were unable to recover it. 00:27:16.373 [2024-07-16 01:32:42.169065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.373 [2024-07-16 01:32:42.169078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.373 qpair failed and we were unable to recover it. 00:27:16.373 [2024-07-16 01:32:42.169212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.373 [2024-07-16 01:32:42.169224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.373 qpair failed and we were unable to recover it. 00:27:16.374 [2024-07-16 01:32:42.169372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.374 [2024-07-16 01:32:42.169385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.374 qpair failed and we were unable to recover it. 00:27:16.374 [2024-07-16 01:32:42.169535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.374 [2024-07-16 01:32:42.169550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.374 qpair failed and we were unable to recover it. 
00:27:16.374 [2024-07-16 01:32:42.169690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.374 [2024-07-16 01:32:42.169702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.374 qpair failed and we were unable to recover it. 00:27:16.374 [2024-07-16 01:32:42.169953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.374 [2024-07-16 01:32:42.169966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.374 qpair failed and we were unable to recover it. 00:27:16.374 [2024-07-16 01:32:42.170199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.374 [2024-07-16 01:32:42.170212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.374 qpair failed and we were unable to recover it. 00:27:16.374 [2024-07-16 01:32:42.170389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.374 [2024-07-16 01:32:42.170402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.374 qpair failed and we were unable to recover it. 00:27:16.374 [2024-07-16 01:32:42.170557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.374 [2024-07-16 01:32:42.170570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.374 qpair failed and we were unable to recover it. 00:27:16.374 [2024-07-16 01:32:42.170651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.374 [2024-07-16 01:32:42.170663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.374 qpair failed and we were unable to recover it. 00:27:16.374 [2024-07-16 01:32:42.170737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.374 [2024-07-16 01:32:42.170749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.374 qpair failed and we were unable to recover it. 00:27:16.374 [2024-07-16 01:32:42.170994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.374 [2024-07-16 01:32:42.171007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.374 qpair failed and we were unable to recover it. 00:27:16.374 [2024-07-16 01:32:42.171155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.374 [2024-07-16 01:32:42.171168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.374 qpair failed and we were unable to recover it. 00:27:16.374 [2024-07-16 01:32:42.171374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.374 [2024-07-16 01:32:42.171387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.374 qpair failed and we were unable to recover it. 
00:27:16.374 [2024-07-16 01:32:42.171533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.374 [2024-07-16 01:32:42.171546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.374 qpair failed and we were unable to recover it. 00:27:16.374 [2024-07-16 01:32:42.171799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.374 [2024-07-16 01:32:42.171811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.374 qpair failed and we were unable to recover it. 00:27:16.374 [2024-07-16 01:32:42.171950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.374 [2024-07-16 01:32:42.171963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.374 qpair failed and we were unable to recover it. 00:27:16.374 [2024-07-16 01:32:42.172123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.374 [2024-07-16 01:32:42.172136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.374 qpair failed and we were unable to recover it. 00:27:16.374 [2024-07-16 01:32:42.172289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.374 [2024-07-16 01:32:42.172302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.374 qpair failed and we were unable to recover it. 00:27:16.374 [2024-07-16 01:32:42.172436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.374 [2024-07-16 01:32:42.172451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.374 qpair failed and we were unable to recover it. 00:27:16.374 [2024-07-16 01:32:42.172547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.374 [2024-07-16 01:32:42.172560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.374 qpair failed and we were unable to recover it. 00:27:16.374 [2024-07-16 01:32:42.172796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.374 [2024-07-16 01:32:42.172808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.374 qpair failed and we were unable to recover it. 00:27:16.374 [2024-07-16 01:32:42.172964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.374 [2024-07-16 01:32:42.172977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.374 qpair failed and we were unable to recover it. 00:27:16.374 [2024-07-16 01:32:42.173157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.374 [2024-07-16 01:32:42.173169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.374 qpair failed and we were unable to recover it. 
00:27:16.374 [2024-07-16 01:32:42.173325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.374 [2024-07-16 01:32:42.173343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.374 qpair failed and we were unable to recover it. 00:27:16.374 [2024-07-16 01:32:42.173514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.374 [2024-07-16 01:32:42.173526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.374 qpair failed and we were unable to recover it. 00:27:16.374 [2024-07-16 01:32:42.173632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.374 [2024-07-16 01:32:42.173644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.374 qpair failed and we were unable to recover it. 00:27:16.374 [2024-07-16 01:32:42.173797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.374 [2024-07-16 01:32:42.173810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.374 qpair failed and we were unable to recover it. 00:27:16.374 [2024-07-16 01:32:42.173893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.374 [2024-07-16 01:32:42.173906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.374 qpair failed and we were unable to recover it. 00:27:16.374 [2024-07-16 01:32:42.174065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.374 [2024-07-16 01:32:42.174077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.374 qpair failed and we were unable to recover it. 00:27:16.374 [2024-07-16 01:32:42.174166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.374 [2024-07-16 01:32:42.174178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.374 qpair failed and we were unable to recover it. 00:27:16.374 [2024-07-16 01:32:42.174277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.374 [2024-07-16 01:32:42.174289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.374 qpair failed and we were unable to recover it. 00:27:16.374 [2024-07-16 01:32:42.174421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.374 [2024-07-16 01:32:42.174434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.374 qpair failed and we were unable to recover it. 00:27:16.374 [2024-07-16 01:32:42.174509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.374 [2024-07-16 01:32:42.174532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.374 qpair failed and we were unable to recover it. 
00:27:16.374 [2024-07-16 01:32:42.174634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.374 [2024-07-16 01:32:42.174646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.374 qpair failed and we were unable to recover it. 00:27:16.374 [2024-07-16 01:32:42.174733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.374 [2024-07-16 01:32:42.174745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.374 qpair failed and we were unable to recover it. 00:27:16.374 [2024-07-16 01:32:42.174946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.374 [2024-07-16 01:32:42.174958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.374 qpair failed and we were unable to recover it. 00:27:16.374 [2024-07-16 01:32:42.175035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.374 [2024-07-16 01:32:42.175046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.374 qpair failed and we were unable to recover it. 00:27:16.374 [2024-07-16 01:32:42.175136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.374 [2024-07-16 01:32:42.175148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.374 qpair failed and we were unable to recover it. 00:27:16.374 [2024-07-16 01:32:42.175282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.374 [2024-07-16 01:32:42.175294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.374 qpair failed and we were unable to recover it. 00:27:16.374 [2024-07-16 01:32:42.175372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.374 [2024-07-16 01:32:42.175383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.374 qpair failed and we were unable to recover it. 00:27:16.375 [2024-07-16 01:32:42.175589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.375 [2024-07-16 01:32:42.175601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.375 qpair failed and we were unable to recover it. 00:27:16.375 [2024-07-16 01:32:42.175770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.375 [2024-07-16 01:32:42.175781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.375 qpair failed and we were unable to recover it. 00:27:16.375 [2024-07-16 01:32:42.175873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.375 [2024-07-16 01:32:42.175888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.375 qpair failed and we were unable to recover it. 
00:27:16.375 [2024-07-16 01:32:42.175974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.375 [2024-07-16 01:32:42.175985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.375 qpair failed and we were unable to recover it. 00:27:16.375 [2024-07-16 01:32:42.176159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.375 [2024-07-16 01:32:42.176171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.375 qpair failed and we were unable to recover it. 00:27:16.375 [2024-07-16 01:32:42.176324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.375 [2024-07-16 01:32:42.176341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.375 qpair failed and we were unable to recover it. 00:27:16.375 [2024-07-16 01:32:42.176435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.375 [2024-07-16 01:32:42.176447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.375 qpair failed and we were unable to recover it. 00:27:16.375 [2024-07-16 01:32:42.176678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.375 [2024-07-16 01:32:42.176690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.375 qpair failed and we were unable to recover it. 00:27:16.375 [2024-07-16 01:32:42.176772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.375 [2024-07-16 01:32:42.176784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.375 qpair failed and we were unable to recover it. 00:27:16.375 [2024-07-16 01:32:42.176889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.375 [2024-07-16 01:32:42.176901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.375 qpair failed and we were unable to recover it. 00:27:16.375 [2024-07-16 01:32:42.176980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.375 [2024-07-16 01:32:42.176991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.375 qpair failed and we were unable to recover it. 00:27:16.375 [2024-07-16 01:32:42.177197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.375 [2024-07-16 01:32:42.177208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.375 qpair failed and we were unable to recover it. 00:27:16.375 [2024-07-16 01:32:42.177303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.375 [2024-07-16 01:32:42.177315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.375 qpair failed and we were unable to recover it. 
00:27:16.375 [2024-07-16 01:32:42.177399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.375 [2024-07-16 01:32:42.177412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.375 qpair failed and we were unable to recover it.
[... 01:32:42.177545 through 01:32:42.185031: the same connect()/qpair-connect error triple repeats for every retry against tqpair=0x7ff0bc000b90, addr=10.0.0.2, port=4420, each attempt ending "qpair failed and we were unable to recover it." ...]
[... 01:32:42.185098 through 01:32:42.185326: three more identical retries against tqpair=0x7ff0bc000b90 ...]
00:27:16.376 [2024-07-16 01:32:42.185457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.376 [2024-07-16 01:32:42.185494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420
00:27:16.376 qpair failed and we were unable to recover it.
00:27:16.376 [2024-07-16 01:32:42.185597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.376 [2024-07-16 01:32:42.185616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420
00:27:16.376 qpair failed and we were unable to recover it.
[... 01:32:42.185723 through 01:32:42.186331: the same triple repeats against tqpair=0x7ff0c4000b90 ...]
[... 01:32:42.186489 through 01:32:42.190560: the same connect()/qpair-connect error triple repeats for every retry against tqpair=0x7ff0c4000b90, addr=10.0.0.2, port=4420, each attempt ending "qpair failed and we were unable to recover it." ...]
[... 01:32:42.190726 through 01:32:42.190917: two final retries against tqpair=0x7ff0c4000b90 ...]
00:27:16.377 [2024-07-16 01:32:42.191001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.377 [2024-07-16 01:32:42.191016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.377 qpair failed and we were unable to recover it.
[... 01:32:42.191221 through 01:32:42.206357: the same triple repeats for every remaining retry against tqpair=0x7ff0bc000b90, addr=10.0.0.2, port=4420; every attempt ends "qpair failed and we were unable to recover it." ...]
00:27:16.380 [2024-07-16 01:32:42.206537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.380 [2024-07-16 01:32:42.206549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.380 qpair failed and we were unable to recover it. 00:27:16.380 [2024-07-16 01:32:42.206693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.380 [2024-07-16 01:32:42.206706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.380 qpair failed and we were unable to recover it. 00:27:16.380 [2024-07-16 01:32:42.206776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.380 [2024-07-16 01:32:42.206787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.380 qpair failed and we were unable to recover it. 00:27:16.380 [2024-07-16 01:32:42.206928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.380 [2024-07-16 01:32:42.206940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.380 qpair failed and we were unable to recover it. 00:27:16.380 [2024-07-16 01:32:42.207151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.380 [2024-07-16 01:32:42.207164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.380 qpair failed and we were unable to recover it. 00:27:16.380 [2024-07-16 01:32:42.207308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.380 [2024-07-16 01:32:42.207320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.380 qpair failed and we were unable to recover it. 00:27:16.380 [2024-07-16 01:32:42.207527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.380 [2024-07-16 01:32:42.207543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-07-16 01:32:42.207676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-07-16 01:32:42.207689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-07-16 01:32:42.207893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-07-16 01:32:42.207906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-07-16 01:32:42.208008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-07-16 01:32:42.208021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 
00:27:16.381 [2024-07-16 01:32:42.208183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-07-16 01:32:42.208196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-07-16 01:32:42.208402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-07-16 01:32:42.208416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-07-16 01:32:42.208575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-07-16 01:32:42.208587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-07-16 01:32:42.208770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-07-16 01:32:42.208782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-07-16 01:32:42.208931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-07-16 01:32:42.208944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-07-16 01:32:42.209033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-07-16 01:32:42.209046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-07-16 01:32:42.209130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-07-16 01:32:42.209143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-07-16 01:32:42.209293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-07-16 01:32:42.209306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-07-16 01:32:42.209451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-07-16 01:32:42.209463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-07-16 01:32:42.209609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-07-16 01:32:42.209621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 
00:27:16.381 [2024-07-16 01:32:42.209839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-07-16 01:32:42.209852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-07-16 01:32:42.209941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-07-16 01:32:42.209953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-07-16 01:32:42.210029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-07-16 01:32:42.210041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-07-16 01:32:42.210146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-07-16 01:32:42.210158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-07-16 01:32:42.210236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-07-16 01:32:42.210249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-07-16 01:32:42.210383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-07-16 01:32:42.210396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-07-16 01:32:42.210468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-07-16 01:32:42.210480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-07-16 01:32:42.210634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-07-16 01:32:42.210646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-07-16 01:32:42.210740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-07-16 01:32:42.210753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-07-16 01:32:42.210909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-07-16 01:32:42.210922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 
00:27:16.381 [2024-07-16 01:32:42.211061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-07-16 01:32:42.211074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-07-16 01:32:42.211223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-07-16 01:32:42.211236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-07-16 01:32:42.211374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-07-16 01:32:42.211387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-07-16 01:32:42.211500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-07-16 01:32:42.211513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-07-16 01:32:42.211592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-07-16 01:32:42.211605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-07-16 01:32:42.211769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-07-16 01:32:42.211782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-07-16 01:32:42.211851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-07-16 01:32:42.211862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-07-16 01:32:42.212051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-07-16 01:32:42.212064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-07-16 01:32:42.212307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-07-16 01:32:42.212320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-07-16 01:32:42.212418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-07-16 01:32:42.212432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 
00:27:16.381 [2024-07-16 01:32:42.212526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-07-16 01:32:42.212539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-07-16 01:32:42.212632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-07-16 01:32:42.212645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-07-16 01:32:42.212794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-07-16 01:32:42.212808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-07-16 01:32:42.212953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-07-16 01:32:42.212965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-07-16 01:32:42.213048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-07-16 01:32:42.213061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-07-16 01:32:42.213210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-07-16 01:32:42.213222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-07-16 01:32:42.213298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-07-16 01:32:42.213313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-07-16 01:32:42.213404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-07-16 01:32:42.213416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-07-16 01:32:42.213550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-07-16 01:32:42.213562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-07-16 01:32:42.213633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-07-16 01:32:42.213645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 
00:27:16.382 [2024-07-16 01:32:42.213739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-07-16 01:32:42.213751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-07-16 01:32:42.213831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-07-16 01:32:42.213844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-07-16 01:32:42.213991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-07-16 01:32:42.214004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-07-16 01:32:42.214145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-07-16 01:32:42.214157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-07-16 01:32:42.214235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-07-16 01:32:42.214247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-07-16 01:32:42.214393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-07-16 01:32:42.214406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-07-16 01:32:42.214544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-07-16 01:32:42.214557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-07-16 01:32:42.214700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-07-16 01:32:42.214713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-07-16 01:32:42.214861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-07-16 01:32:42.214874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-07-16 01:32:42.214944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-07-16 01:32:42.214956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 
00:27:16.382 [2024-07-16 01:32:42.215102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-07-16 01:32:42.215114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-07-16 01:32:42.215281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-07-16 01:32:42.215294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-07-16 01:32:42.215399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-07-16 01:32:42.215412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-07-16 01:32:42.215510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-07-16 01:32:42.215523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-07-16 01:32:42.215618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-07-16 01:32:42.215631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-07-16 01:32:42.215707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-07-16 01:32:42.215719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-07-16 01:32:42.215795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-07-16 01:32:42.215806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-07-16 01:32:42.215873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-07-16 01:32:42.215884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-07-16 01:32:42.215962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-07-16 01:32:42.215974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-07-16 01:32:42.216128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-07-16 01:32:42.216141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 
00:27:16.382 [2024-07-16 01:32:42.216228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-07-16 01:32:42.216241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-07-16 01:32:42.216319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-07-16 01:32:42.216332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-07-16 01:32:42.216559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-07-16 01:32:42.216573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-07-16 01:32:42.216658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-07-16 01:32:42.216671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-07-16 01:32:42.216851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-07-16 01:32:42.216864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-07-16 01:32:42.217019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-07-16 01:32:42.217031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-07-16 01:32:42.217131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-07-16 01:32:42.217143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-07-16 01:32:42.217211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-07-16 01:32:42.217223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-07-16 01:32:42.217307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-07-16 01:32:42.217320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-07-16 01:32:42.217496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-07-16 01:32:42.217509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 
00:27:16.382 [2024-07-16 01:32:42.217593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-07-16 01:32:42.217605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-07-16 01:32:42.217664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-07-16 01:32:42.217676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-07-16 01:32:42.217829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-07-16 01:32:42.217841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-07-16 01:32:42.217931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-07-16 01:32:42.217944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-07-16 01:32:42.218196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-07-16 01:32:42.218209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-07-16 01:32:42.218310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-07-16 01:32:42.218323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-07-16 01:32:42.218529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-07-16 01:32:42.218544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 00:27:16.383 [2024-07-16 01:32:42.218689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-07-16 01:32:42.218702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 00:27:16.383 [2024-07-16 01:32:42.218778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-07-16 01:32:42.218790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 00:27:16.383 [2024-07-16 01:32:42.218874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-07-16 01:32:42.218887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 
00:27:16.383 [2024-07-16 01:32:42.219048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-07-16 01:32:42.219061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 00:27:16.383 [2024-07-16 01:32:42.219199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-07-16 01:32:42.219212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 00:27:16.383 [2024-07-16 01:32:42.219311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-07-16 01:32:42.219323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 00:27:16.383 [2024-07-16 01:32:42.219408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-07-16 01:32:42.219421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 00:27:16.383 [2024-07-16 01:32:42.219488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-07-16 01:32:42.219501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 00:27:16.383 [2024-07-16 01:32:42.219587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-07-16 01:32:42.219600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 00:27:16.383 [2024-07-16 01:32:42.219734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-07-16 01:32:42.219747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 00:27:16.383 [2024-07-16 01:32:42.219883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-07-16 01:32:42.219895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 00:27:16.383 [2024-07-16 01:32:42.219970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-07-16 01:32:42.219982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 00:27:16.383 [2024-07-16 01:32:42.220213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-07-16 01:32:42.220225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 
00:27:16.383 [2024-07-16 01:32:42.220304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-07-16 01:32:42.220317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 00:27:16.383 [2024-07-16 01:32:42.220503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-07-16 01:32:42.220516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 00:27:16.383 [2024-07-16 01:32:42.220670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-07-16 01:32:42.220683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 00:27:16.383 [2024-07-16 01:32:42.220776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-07-16 01:32:42.220788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 00:27:16.383 [2024-07-16 01:32:42.220950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-07-16 01:32:42.220963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 00:27:16.383 [2024-07-16 01:32:42.221124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-07-16 01:32:42.221137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 00:27:16.383 [2024-07-16 01:32:42.221362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-07-16 01:32:42.221376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 00:27:16.383 [2024-07-16 01:32:42.221461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-07-16 01:32:42.221473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 00:27:16.383 [2024-07-16 01:32:42.221615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-07-16 01:32:42.221628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 00:27:16.383 [2024-07-16 01:32:42.221696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-07-16 01:32:42.221707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 
00:27:16.383 [2024-07-16 01:32:42.221774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-07-16 01:32:42.221785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 00:27:16.383 [2024-07-16 01:32:42.221989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-07-16 01:32:42.222002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 00:27:16.383 [2024-07-16 01:32:42.222088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-07-16 01:32:42.222100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 00:27:16.383 [2024-07-16 01:32:42.222174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-07-16 01:32:42.222186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 00:27:16.383 [2024-07-16 01:32:42.222283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-07-16 01:32:42.222295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 00:27:16.383 [2024-07-16 01:32:42.222437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-07-16 01:32:42.222450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 00:27:16.383 [2024-07-16 01:32:42.222524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-07-16 01:32:42.222535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 00:27:16.383 [2024-07-16 01:32:42.222620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-07-16 01:32:42.222632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 00:27:16.383 [2024-07-16 01:32:42.222722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-07-16 01:32:42.222734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 00:27:16.383 [2024-07-16 01:32:42.222813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-07-16 01:32:42.222826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 
00:27:16.383 [2024-07-16 01:32:42.222899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-07-16 01:32:42.222911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 00:27:16.383 [2024-07-16 01:32:42.223044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-07-16 01:32:42.223057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 00:27:16.383 [2024-07-16 01:32:42.223231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-07-16 01:32:42.223244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 00:27:16.383 [2024-07-16 01:32:42.223335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-07-16 01:32:42.223352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 00:27:16.383 [2024-07-16 01:32:42.223489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-07-16 01:32:42.223502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 00:27:16.383 [2024-07-16 01:32:42.223583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-07-16 01:32:42.223595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 00:27:16.383 [2024-07-16 01:32:42.223675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-07-16 01:32:42.223689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 00:27:16.384 [2024-07-16 01:32:42.223834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.384 [2024-07-16 01:32:42.223846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.384 qpair failed and we were unable to recover it. 00:27:16.384 [2024-07-16 01:32:42.223976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.384 [2024-07-16 01:32:42.223988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.384 qpair failed and we were unable to recover it. 00:27:16.384 [2024-07-16 01:32:42.224067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.384 [2024-07-16 01:32:42.224079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.384 qpair failed and we were unable to recover it. 
00:27:16.384 [2024-07-16 01:32:42.224214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.384 [2024-07-16 01:32:42.224226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.384 qpair failed and we were unable to recover it. 
[... the same three-line failure (posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=... with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats back-to-back from 01:32:42.224 through 01:32:42.253 (log timestamps 00:27:16.384 to 00:27:16.389). Nearly every attempt reports tqpair=0x7ff0bc000b90; at 01:32:42.239 a single attempt reports tqpair=0x7ff0b4000b90, the following attempts through 01:32:42.241 report tqpair=0x7ff0c4000b90, and the log then returns to tqpair=0x7ff0bc000b90 for the remainder of the run. ...]
00:27:16.389 [2024-07-16 01:32:42.253988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.389 [2024-07-16 01:32:42.254000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.389 qpair failed and we were unable to recover it. 00:27:16.389 [2024-07-16 01:32:42.254141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.389 [2024-07-16 01:32:42.254154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.389 qpair failed and we were unable to recover it. 00:27:16.389 [2024-07-16 01:32:42.254242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.389 [2024-07-16 01:32:42.254256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.389 qpair failed and we were unable to recover it. 00:27:16.389 [2024-07-16 01:32:42.254344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.389 [2024-07-16 01:32:42.254357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.389 qpair failed and we were unable to recover it. 00:27:16.389 [2024-07-16 01:32:42.254500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.389 [2024-07-16 01:32:42.254513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.389 qpair failed and we were unable to recover it. 00:27:16.389 [2024-07-16 01:32:42.254645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.389 [2024-07-16 01:32:42.254658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.389 qpair failed and we were unable to recover it. 00:27:16.389 [2024-07-16 01:32:42.254746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.389 [2024-07-16 01:32:42.254759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.389 qpair failed and we were unable to recover it. 00:27:16.389 [2024-07-16 01:32:42.254922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.389 [2024-07-16 01:32:42.254934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.389 qpair failed and we were unable to recover it. 00:27:16.389 [2024-07-16 01:32:42.255084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.389 [2024-07-16 01:32:42.255096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.389 qpair failed and we were unable to recover it. 00:27:16.389 [2024-07-16 01:32:42.255210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.389 [2024-07-16 01:32:42.255222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.389 qpair failed and we were unable to recover it. 
00:27:16.389 [2024-07-16 01:32:42.255370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.389 [2024-07-16 01:32:42.255383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.389 qpair failed and we were unable to recover it. 00:27:16.389 [2024-07-16 01:32:42.255587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.389 [2024-07-16 01:32:42.255600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.389 qpair failed and we were unable to recover it. 00:27:16.389 [2024-07-16 01:32:42.255751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.389 [2024-07-16 01:32:42.255764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.389 qpair failed and we were unable to recover it. 00:27:16.389 [2024-07-16 01:32:42.255843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.390 [2024-07-16 01:32:42.255861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.390 qpair failed and we were unable to recover it. 00:27:16.390 [2024-07-16 01:32:42.256012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.390 [2024-07-16 01:32:42.256027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.390 qpair failed and we were unable to recover it. 00:27:16.390 [2024-07-16 01:32:42.256124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.390 [2024-07-16 01:32:42.256141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.390 qpair failed and we were unable to recover it. 00:27:16.390 [2024-07-16 01:32:42.256395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.390 [2024-07-16 01:32:42.256412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.390 qpair failed and we were unable to recover it. 00:27:16.390 [2024-07-16 01:32:42.256500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.390 [2024-07-16 01:32:42.256521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.390 qpair failed and we were unable to recover it. 00:27:16.390 [2024-07-16 01:32:42.256668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.390 [2024-07-16 01:32:42.256684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.390 qpair failed and we were unable to recover it. 00:27:16.390 [2024-07-16 01:32:42.256845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.390 [2024-07-16 01:32:42.256861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.390 qpair failed and we were unable to recover it. 
00:27:16.390 [2024-07-16 01:32:42.256939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.390 [2024-07-16 01:32:42.256955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.390 qpair failed and we were unable to recover it. 00:27:16.390 [2024-07-16 01:32:42.257134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.390 [2024-07-16 01:32:42.257150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.390 qpair failed and we were unable to recover it. 00:27:16.390 [2024-07-16 01:32:42.257292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.390 [2024-07-16 01:32:42.257308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.390 qpair failed and we were unable to recover it. 00:27:16.390 [2024-07-16 01:32:42.257404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.390 [2024-07-16 01:32:42.257419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.390 qpair failed and we were unable to recover it. 00:27:16.390 [2024-07-16 01:32:42.257573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.390 [2024-07-16 01:32:42.257590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.390 qpair failed and we were unable to recover it. 00:27:16.390 [2024-07-16 01:32:42.257685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.390 [2024-07-16 01:32:42.257701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.390 qpair failed and we were unable to recover it. 00:27:16.390 [2024-07-16 01:32:42.257785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.390 [2024-07-16 01:32:42.257804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.390 qpair failed and we were unable to recover it. 00:27:16.390 [2024-07-16 01:32:42.257883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.390 [2024-07-16 01:32:42.257899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.390 qpair failed and we were unable to recover it. 00:27:16.390 [2024-07-16 01:32:42.258069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.390 [2024-07-16 01:32:42.258085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.390 qpair failed and we were unable to recover it. 00:27:16.390 [2024-07-16 01:32:42.258256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.390 [2024-07-16 01:32:42.258272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.390 qpair failed and we were unable to recover it. 
00:27:16.390 [2024-07-16 01:32:42.258343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.390 [2024-07-16 01:32:42.258364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.390 qpair failed and we were unable to recover it. 00:27:16.390 [2024-07-16 01:32:42.258456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.390 [2024-07-16 01:32:42.258472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.390 qpair failed and we were unable to recover it. 00:27:16.390 [2024-07-16 01:32:42.258561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.390 [2024-07-16 01:32:42.258577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.390 qpair failed and we were unable to recover it. 00:27:16.390 [2024-07-16 01:32:42.258666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.390 [2024-07-16 01:32:42.258682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.390 qpair failed and we were unable to recover it. 00:27:16.390 [2024-07-16 01:32:42.258859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.390 [2024-07-16 01:32:42.258875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.390 qpair failed and we were unable to recover it. 00:27:16.390 [2024-07-16 01:32:42.258966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.390 [2024-07-16 01:32:42.258983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.390 qpair failed and we were unable to recover it. 00:27:16.390 [2024-07-16 01:32:42.259191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.390 [2024-07-16 01:32:42.259207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.390 qpair failed and we were unable to recover it. 00:27:16.390 [2024-07-16 01:32:42.259376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.390 [2024-07-16 01:32:42.259392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.390 qpair failed and we were unable to recover it. 00:27:16.390 [2024-07-16 01:32:42.259491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.390 [2024-07-16 01:32:42.259504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.390 qpair failed and we were unable to recover it. 00:27:16.390 [2024-07-16 01:32:42.259589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.390 [2024-07-16 01:32:42.259602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.390 qpair failed and we were unable to recover it. 
00:27:16.390 [2024-07-16 01:32:42.259697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.390 [2024-07-16 01:32:42.259710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.390 qpair failed and we were unable to recover it. 00:27:16.390 [2024-07-16 01:32:42.259795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.390 [2024-07-16 01:32:42.259807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.390 qpair failed and we were unable to recover it. 00:27:16.390 [2024-07-16 01:32:42.259998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.390 [2024-07-16 01:32:42.260011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.390 qpair failed and we were unable to recover it. 00:27:16.390 [2024-07-16 01:32:42.260151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.390 [2024-07-16 01:32:42.260164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.390 qpair failed and we were unable to recover it. 00:27:16.390 [2024-07-16 01:32:42.260240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.390 [2024-07-16 01:32:42.260252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.390 qpair failed and we were unable to recover it. 00:27:16.390 [2024-07-16 01:32:42.260349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.390 [2024-07-16 01:32:42.260365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.390 qpair failed and we were unable to recover it. 00:27:16.390 [2024-07-16 01:32:42.260432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.390 [2024-07-16 01:32:42.260443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.390 qpair failed and we were unable to recover it. 00:27:16.390 [2024-07-16 01:32:42.260592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.390 [2024-07-16 01:32:42.260605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.390 qpair failed and we were unable to recover it. 00:27:16.390 [2024-07-16 01:32:42.260683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.390 [2024-07-16 01:32:42.260695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.390 qpair failed and we were unable to recover it. 00:27:16.390 [2024-07-16 01:32:42.260839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.390 [2024-07-16 01:32:42.260851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.390 qpair failed and we were unable to recover it. 
00:27:16.390 [2024-07-16 01:32:42.260997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.390 [2024-07-16 01:32:42.261010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.390 qpair failed and we were unable to recover it. 00:27:16.390 [2024-07-16 01:32:42.261147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.390 [2024-07-16 01:32:42.261160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.390 qpair failed and we were unable to recover it. 00:27:16.390 [2024-07-16 01:32:42.261364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.390 [2024-07-16 01:32:42.261377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.390 qpair failed and we were unable to recover it. 00:27:16.390 [2024-07-16 01:32:42.261462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.391 [2024-07-16 01:32:42.261477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.391 qpair failed and we were unable to recover it. 00:27:16.391 [2024-07-16 01:32:42.261560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.391 [2024-07-16 01:32:42.261572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.391 qpair failed and we were unable to recover it. 00:27:16.391 [2024-07-16 01:32:42.261643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.391 [2024-07-16 01:32:42.261655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.391 qpair failed and we were unable to recover it. 00:27:16.391 [2024-07-16 01:32:42.261852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.391 [2024-07-16 01:32:42.261864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.391 qpair failed and we were unable to recover it. 00:27:16.391 [2024-07-16 01:32:42.261946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.391 [2024-07-16 01:32:42.261959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.391 qpair failed and we were unable to recover it. 00:27:16.391 [2024-07-16 01:32:42.262097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.391 [2024-07-16 01:32:42.262109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.391 qpair failed and we were unable to recover it. 00:27:16.391 [2024-07-16 01:32:42.262202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.391 [2024-07-16 01:32:42.262215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.391 qpair failed and we were unable to recover it. 
00:27:16.391 [2024-07-16 01:32:42.262368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.391 [2024-07-16 01:32:42.262381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.391 qpair failed and we were unable to recover it. 00:27:16.391 [2024-07-16 01:32:42.262557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.391 [2024-07-16 01:32:42.262570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.391 qpair failed and we were unable to recover it. 00:27:16.391 [2024-07-16 01:32:42.262670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.391 [2024-07-16 01:32:42.262682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.391 qpair failed and we were unable to recover it. 00:27:16.391 [2024-07-16 01:32:42.262781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.391 [2024-07-16 01:32:42.262793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.391 qpair failed and we were unable to recover it. 00:27:16.391 [2024-07-16 01:32:42.262863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.391 [2024-07-16 01:32:42.262875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.391 qpair failed and we were unable to recover it. 00:27:16.391 [2024-07-16 01:32:42.262964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.391 [2024-07-16 01:32:42.262977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.391 qpair failed and we were unable to recover it. 00:27:16.391 [2024-07-16 01:32:42.263124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.391 [2024-07-16 01:32:42.263136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.391 qpair failed and we were unable to recover it. 00:27:16.391 [2024-07-16 01:32:42.263367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.391 [2024-07-16 01:32:42.263380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.391 qpair failed and we were unable to recover it. 00:27:16.391 [2024-07-16 01:32:42.263627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.391 [2024-07-16 01:32:42.263639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.391 qpair failed and we were unable to recover it. 00:27:16.391 [2024-07-16 01:32:42.263710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.391 [2024-07-16 01:32:42.263721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.391 qpair failed and we were unable to recover it. 
00:27:16.391 [2024-07-16 01:32:42.263791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.391 [2024-07-16 01:32:42.263803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.391 qpair failed and we were unable to recover it. 00:27:16.391 [2024-07-16 01:32:42.263957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.391 [2024-07-16 01:32:42.263969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.391 qpair failed and we were unable to recover it. 00:27:16.391 [2024-07-16 01:32:42.264059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.391 [2024-07-16 01:32:42.264071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.391 qpair failed and we were unable to recover it. 00:27:16.391 [2024-07-16 01:32:42.264213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.391 [2024-07-16 01:32:42.264225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.391 qpair failed and we were unable to recover it. 00:27:16.391 [2024-07-16 01:32:42.264310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.391 [2024-07-16 01:32:42.264323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.391 qpair failed and we were unable to recover it. 00:27:16.391 [2024-07-16 01:32:42.264408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.391 [2024-07-16 01:32:42.264422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.391 qpair failed and we were unable to recover it. 00:27:16.391 [2024-07-16 01:32:42.264535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.391 [2024-07-16 01:32:42.264547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.391 qpair failed and we were unable to recover it. 00:27:16.391 [2024-07-16 01:32:42.264648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.391 [2024-07-16 01:32:42.264660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.391 qpair failed and we were unable to recover it. 00:27:16.391 [2024-07-16 01:32:42.264806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.391 [2024-07-16 01:32:42.264818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.391 qpair failed and we were unable to recover it. 00:27:16.391 [2024-07-16 01:32:42.264896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.391 [2024-07-16 01:32:42.264909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.391 qpair failed and we were unable to recover it. 
00:27:16.391 [2024-07-16 01:32:42.265049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.391 [2024-07-16 01:32:42.265062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.391 qpair failed and we were unable to recover it. 00:27:16.391 [2024-07-16 01:32:42.265131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.391 [2024-07-16 01:32:42.265143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.391 qpair failed and we were unable to recover it. 00:27:16.391 [2024-07-16 01:32:42.265284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.391 [2024-07-16 01:32:42.265296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.391 qpair failed and we were unable to recover it. 00:27:16.391 [2024-07-16 01:32:42.265374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.391 [2024-07-16 01:32:42.265386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.391 qpair failed and we were unable to recover it. 00:27:16.391 [2024-07-16 01:32:42.265475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.391 [2024-07-16 01:32:42.265487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.391 qpair failed and we were unable to recover it. 00:27:16.391 [2024-07-16 01:32:42.265585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.391 [2024-07-16 01:32:42.265597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.391 qpair failed and we were unable to recover it. 00:27:16.391 [2024-07-16 01:32:42.265673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.391 [2024-07-16 01:32:42.265686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.391 qpair failed and we were unable to recover it. 00:27:16.391 [2024-07-16 01:32:42.265773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.391 [2024-07-16 01:32:42.265785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.391 qpair failed and we were unable to recover it. 00:27:16.391 [2024-07-16 01:32:42.265872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.391 [2024-07-16 01:32:42.265885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.391 qpair failed and we were unable to recover it. 00:27:16.391 [2024-07-16 01:32:42.266046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.391 [2024-07-16 01:32:42.266058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.391 qpair failed and we were unable to recover it. 
00:27:16.391 [2024-07-16 01:32:42.266196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.391 [2024-07-16 01:32:42.266208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.391 qpair failed and we were unable to recover it. 00:27:16.391 [2024-07-16 01:32:42.266283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.391 [2024-07-16 01:32:42.266296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.391 qpair failed and we were unable to recover it. 00:27:16.391 [2024-07-16 01:32:42.266377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.391 [2024-07-16 01:32:42.266390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.391 qpair failed and we were unable to recover it. 00:27:16.391 [2024-07-16 01:32:42.266533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.392 [2024-07-16 01:32:42.266548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.392 qpair failed and we were unable to recover it. 00:27:16.392 [2024-07-16 01:32:42.266633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.392 [2024-07-16 01:32:42.266646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.392 qpair failed and we were unable to recover it. 00:27:16.392 [2024-07-16 01:32:42.266728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.392 [2024-07-16 01:32:42.266741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.392 qpair failed and we were unable to recover it. 00:27:16.392 [2024-07-16 01:32:42.266965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.392 [2024-07-16 01:32:42.266978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.392 qpair failed and we were unable to recover it. 00:27:16.392 [2024-07-16 01:32:42.267066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.392 [2024-07-16 01:32:42.267079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.392 qpair failed and we were unable to recover it. 00:27:16.392 [2024-07-16 01:32:42.267188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.392 [2024-07-16 01:32:42.267201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.392 qpair failed and we were unable to recover it. 00:27:16.392 [2024-07-16 01:32:42.267271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.392 [2024-07-16 01:32:42.267283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.392 qpair failed and we were unable to recover it. 
00:27:16.392 [2024-07-16 01:32:42.267350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.392 [2024-07-16 01:32:42.267362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.392 qpair failed and we were unable to recover it. 00:27:16.392 [2024-07-16 01:32:42.267563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.392 [2024-07-16 01:32:42.267575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.392 qpair failed and we were unable to recover it. 00:27:16.392 [2024-07-16 01:32:42.267650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.392 [2024-07-16 01:32:42.267663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.392 qpair failed and we were unable to recover it. 00:27:16.392 [2024-07-16 01:32:42.267796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.392 [2024-07-16 01:32:42.267808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.392 qpair failed and we were unable to recover it. 00:27:16.392 [2024-07-16 01:32:42.267940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.392 [2024-07-16 01:32:42.267953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.392 qpair failed and we were unable to recover it. 00:27:16.392 [2024-07-16 01:32:42.268032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.392 [2024-07-16 01:32:42.268045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.392 qpair failed and we were unable to recover it. 00:27:16.392 [2024-07-16 01:32:42.268188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.392 [2024-07-16 01:32:42.268201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.392 qpair failed and we were unable to recover it. 00:27:16.392 [2024-07-16 01:32:42.268359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.392 [2024-07-16 01:32:42.268373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.392 qpair failed and we were unable to recover it. 00:27:16.392 [2024-07-16 01:32:42.268459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.392 [2024-07-16 01:32:42.268472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.392 qpair failed and we were unable to recover it. 00:27:16.392 [2024-07-16 01:32:42.268562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.392 [2024-07-16 01:32:42.268575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.392 qpair failed and we were unable to recover it. 
00:27:16.392 [2024-07-16 01:32:42.268657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.392 [2024-07-16 01:32:42.268669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.392 qpair failed and we were unable to recover it. 00:27:16.392 [2024-07-16 01:32:42.268805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.392 [2024-07-16 01:32:42.268818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.392 qpair failed and we were unable to recover it. 00:27:16.392 [2024-07-16 01:32:42.268967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.392 [2024-07-16 01:32:42.268979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.392 qpair failed and we were unable to recover it. 00:27:16.392 [2024-07-16 01:32:42.269121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.392 [2024-07-16 01:32:42.269134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.392 qpair failed and we were unable to recover it. 00:27:16.392 [2024-07-16 01:32:42.269251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.392 [2024-07-16 01:32:42.269263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.392 qpair failed and we were unable to recover it. 00:27:16.392 [2024-07-16 01:32:42.269333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.392 [2024-07-16 01:32:42.269363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.392 qpair failed and we were unable to recover it. 00:27:16.392 [2024-07-16 01:32:42.269455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.392 [2024-07-16 01:32:42.269468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.392 qpair failed and we were unable to recover it. 00:27:16.392 [2024-07-16 01:32:42.269621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.392 [2024-07-16 01:32:42.269633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.392 qpair failed and we were unable to recover it. 00:27:16.392 [2024-07-16 01:32:42.269708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.392 [2024-07-16 01:32:42.269721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.392 qpair failed and we were unable to recover it. 00:27:16.392 [2024-07-16 01:32:42.269854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.392 [2024-07-16 01:32:42.269867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.392 qpair failed and we were unable to recover it. 
00:27:16.392 [2024-07-16 01:32:42.269966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.392 [2024-07-16 01:32:42.269978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.392 qpair failed and we were unable to recover it. 00:27:16.392 [2024-07-16 01:32:42.270132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.392 [2024-07-16 01:32:42.270145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.392 qpair failed and we were unable to recover it. 00:27:16.392 [2024-07-16 01:32:42.270282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.392 [2024-07-16 01:32:42.270295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.392 qpair failed and we were unable to recover it. 00:27:16.392 [2024-07-16 01:32:42.270377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.392 [2024-07-16 01:32:42.270390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.392 qpair failed and we were unable to recover it. 00:27:16.392 [2024-07-16 01:32:42.270486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.392 [2024-07-16 01:32:42.270498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.392 qpair failed and we were unable to recover it. 00:27:16.392 [2024-07-16 01:32:42.270689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.392 [2024-07-16 01:32:42.270702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.392 qpair failed and we were unable to recover it. 00:27:16.392 [2024-07-16 01:32:42.270791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.392 [2024-07-16 01:32:42.270804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.392 qpair failed and we were unable to recover it. 00:27:16.392 [2024-07-16 01:32:42.270882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.392 [2024-07-16 01:32:42.270895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.392 qpair failed and we were unable to recover it. 00:27:16.392 [2024-07-16 01:32:42.271108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.392 [2024-07-16 01:32:42.271120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.392 qpair failed and we were unable to recover it. 00:27:16.392 [2024-07-16 01:32:42.271194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.392 [2024-07-16 01:32:42.271207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.392 qpair failed and we were unable to recover it. 
00:27:16.392 [2024-07-16 01:32:42.271364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.392 [2024-07-16 01:32:42.271378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.392 qpair failed and we were unable to recover it. 00:27:16.392 [2024-07-16 01:32:42.271533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.392 [2024-07-16 01:32:42.271546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.392 qpair failed and we were unable to recover it. 00:27:16.392 [2024-07-16 01:32:42.271616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.392 [2024-07-16 01:32:42.271629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.392 qpair failed and we were unable to recover it. 00:27:16.392 [2024-07-16 01:32:42.271725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.393 [2024-07-16 01:32:42.271739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.393 qpair failed and we were unable to recover it. 00:27:16.393 [2024-07-16 01:32:42.271890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.393 [2024-07-16 01:32:42.271902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.393 qpair failed and we were unable to recover it. 00:27:16.393 [2024-07-16 01:32:42.272002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.393 [2024-07-16 01:32:42.272015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.393 qpair failed and we were unable to recover it. 00:27:16.393 [2024-07-16 01:32:42.272088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.393 [2024-07-16 01:32:42.272101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.393 qpair failed and we were unable to recover it. 00:27:16.393 [2024-07-16 01:32:42.272190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.393 [2024-07-16 01:32:42.272203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.393 qpair failed and we were unable to recover it. 00:27:16.393 [2024-07-16 01:32:42.272288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.393 [2024-07-16 01:32:42.272300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.393 qpair failed and we were unable to recover it. 00:27:16.393 [2024-07-16 01:32:42.272382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.393 [2024-07-16 01:32:42.272396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.393 qpair failed and we were unable to recover it. 
00:27:16.684 [2024-07-16 01:32:42.296645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.684 [2024-07-16 01:32:42.296657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.684 qpair failed and we were unable to recover it. 00:27:16.684 [2024-07-16 01:32:42.296813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.684 [2024-07-16 01:32:42.296826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.684 qpair failed and we were unable to recover it. 00:27:16.684 [2024-07-16 01:32:42.296969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.684 [2024-07-16 01:32:42.296981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.684 qpair failed and we were unable to recover it. 00:27:16.684 [2024-07-16 01:32:42.297191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.684 [2024-07-16 01:32:42.297204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.684 qpair failed and we were unable to recover it. 00:27:16.684 [2024-07-16 01:32:42.297359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.684 [2024-07-16 01:32:42.297372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.684 qpair failed and we were unable to recover it. 00:27:16.684 [2024-07-16 01:32:42.297442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.684 [2024-07-16 01:32:42.297455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.684 qpair failed and we were unable to recover it. 00:27:16.684 [2024-07-16 01:32:42.297552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.684 [2024-07-16 01:32:42.297565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.684 qpair failed and we were unable to recover it. 00:27:16.684 [2024-07-16 01:32:42.297708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.684 [2024-07-16 01:32:42.297721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.684 qpair failed and we were unable to recover it. 00:27:16.684 [2024-07-16 01:32:42.297804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.684 [2024-07-16 01:32:42.297818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.684 qpair failed and we were unable to recover it. 00:27:16.684 [2024-07-16 01:32:42.297921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-07-16 01:32:42.297934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 
00:27:16.685 [2024-07-16 01:32:42.298018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-07-16 01:32:42.298032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-07-16 01:32:42.298106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-07-16 01:32:42.298119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-07-16 01:32:42.298200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-07-16 01:32:42.298212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-07-16 01:32:42.298440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-07-16 01:32:42.298453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-07-16 01:32:42.298596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-07-16 01:32:42.298608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-07-16 01:32:42.298748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-07-16 01:32:42.298761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-07-16 01:32:42.298839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-07-16 01:32:42.298852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-07-16 01:32:42.298931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-07-16 01:32:42.298944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-07-16 01:32:42.299015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-07-16 01:32:42.299027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-07-16 01:32:42.299236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-07-16 01:32:42.299250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 
00:27:16.685 [2024-07-16 01:32:42.299406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-07-16 01:32:42.299418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-07-16 01:32:42.299588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-07-16 01:32:42.299600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-07-16 01:32:42.299692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-07-16 01:32:42.299705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-07-16 01:32:42.299784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-07-16 01:32:42.299796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-07-16 01:32:42.299987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-07-16 01:32:42.300000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-07-16 01:32:42.300082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-07-16 01:32:42.300095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-07-16 01:32:42.300358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-07-16 01:32:42.300373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-07-16 01:32:42.300451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-07-16 01:32:42.300464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-07-16 01:32:42.300530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-07-16 01:32:42.300542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-07-16 01:32:42.300694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-07-16 01:32:42.300706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 
00:27:16.685 [2024-07-16 01:32:42.300857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-07-16 01:32:42.300871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-07-16 01:32:42.301052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-07-16 01:32:42.301064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-07-16 01:32:42.301155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-07-16 01:32:42.301168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-07-16 01:32:42.301314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-07-16 01:32:42.301331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-07-16 01:32:42.301541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-07-16 01:32:42.301554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-07-16 01:32:42.301649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-07-16 01:32:42.301662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-07-16 01:32:42.301891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-07-16 01:32:42.301904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-07-16 01:32:42.301983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-07-16 01:32:42.301996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-07-16 01:32:42.302079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-07-16 01:32:42.302093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-07-16 01:32:42.302174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-07-16 01:32:42.302187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 
00:27:16.685 [2024-07-16 01:32:42.302319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-07-16 01:32:42.302332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-07-16 01:32:42.302432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-07-16 01:32:42.302445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-07-16 01:32:42.302582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-07-16 01:32:42.302595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-07-16 01:32:42.302659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-07-16 01:32:42.302671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-07-16 01:32:42.302823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-07-16 01:32:42.302836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-07-16 01:32:42.302929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-07-16 01:32:42.302941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-07-16 01:32:42.303155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-07-16 01:32:42.303168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-07-16 01:32:42.303253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-07-16 01:32:42.303266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-07-16 01:32:42.303377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-07-16 01:32:42.303389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 00:27:16.685 [2024-07-16 01:32:42.303455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.685 [2024-07-16 01:32:42.303467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.685 qpair failed and we were unable to recover it. 
00:27:16.686 [2024-07-16 01:32:42.303546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-07-16 01:32:42.303558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-07-16 01:32:42.303623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-07-16 01:32:42.303636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-07-16 01:32:42.303709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-07-16 01:32:42.303721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-07-16 01:32:42.303796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-07-16 01:32:42.303809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-07-16 01:32:42.303876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-07-16 01:32:42.303889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-07-16 01:32:42.304090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-07-16 01:32:42.304102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-07-16 01:32:42.304190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-07-16 01:32:42.304203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-07-16 01:32:42.304334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-07-16 01:32:42.304356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-07-16 01:32:42.304421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-07-16 01:32:42.304433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-07-16 01:32:42.304503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-07-16 01:32:42.304515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 
00:27:16.686 [2024-07-16 01:32:42.304744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-07-16 01:32:42.304778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-07-16 01:32:42.304876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-07-16 01:32:42.304893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-07-16 01:32:42.305125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-07-16 01:32:42.305142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-07-16 01:32:42.305224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-07-16 01:32:42.305239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-07-16 01:32:42.305319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-07-16 01:32:42.305341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-07-16 01:32:42.305446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-07-16 01:32:42.305464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-07-16 01:32:42.305621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-07-16 01:32:42.305635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-07-16 01:32:42.305710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-07-16 01:32:42.305722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-07-16 01:32:42.305874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-07-16 01:32:42.305888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-07-16 01:32:42.306036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-07-16 01:32:42.306048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 
00:27:16.686 [2024-07-16 01:32:42.306189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-07-16 01:32:42.306201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-07-16 01:32:42.306446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-07-16 01:32:42.306459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-07-16 01:32:42.306599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-07-16 01:32:42.306611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-07-16 01:32:42.306684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-07-16 01:32:42.306698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-07-16 01:32:42.306834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-07-16 01:32:42.306847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-07-16 01:32:42.306921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-07-16 01:32:42.306934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-07-16 01:32:42.307082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-07-16 01:32:42.307094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-07-16 01:32:42.307175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-07-16 01:32:42.307188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-07-16 01:32:42.307348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-07-16 01:32:42.307360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-07-16 01:32:42.307434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-07-16 01:32:42.307446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 
00:27:16.686 [2024-07-16 01:32:42.307579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-07-16 01:32:42.307591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-07-16 01:32:42.307729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-07-16 01:32:42.307742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-07-16 01:32:42.307808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-07-16 01:32:42.307821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-07-16 01:32:42.307903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-07-16 01:32:42.307916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-07-16 01:32:42.308068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-07-16 01:32:42.308081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-07-16 01:32:42.308162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-07-16 01:32:42.308175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-07-16 01:32:42.308241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-07-16 01:32:42.308254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-07-16 01:32:42.308413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-07-16 01:32:42.308428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-07-16 01:32:42.308566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-07-16 01:32:42.308579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 00:27:16.686 [2024-07-16 01:32:42.308714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-07-16 01:32:42.308727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.686 qpair failed and we were unable to recover it. 
00:27:16.686 [2024-07-16 01:32:42.308812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.686 [2024-07-16 01:32:42.308824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.687 qpair failed and we were unable to recover it. 00:27:16.687 [2024-07-16 01:32:42.308982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.687 [2024-07-16 01:32:42.308995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.687 qpair failed and we were unable to recover it. 00:27:16.687 [2024-07-16 01:32:42.309132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.687 [2024-07-16 01:32:42.309145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.687 qpair failed and we were unable to recover it. 00:27:16.687 [2024-07-16 01:32:42.309215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.687 [2024-07-16 01:32:42.309227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.687 qpair failed and we were unable to recover it. 00:27:16.687 [2024-07-16 01:32:42.309381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.687 [2024-07-16 01:32:42.309394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.687 qpair failed and we were unable to recover it. 00:27:16.687 [2024-07-16 01:32:42.309474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.687 [2024-07-16 01:32:42.309487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.687 qpair failed and we were unable to recover it. 00:27:16.687 [2024-07-16 01:32:42.309636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.687 [2024-07-16 01:32:42.309648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.687 qpair failed and we were unable to recover it. 00:27:16.687 [2024-07-16 01:32:42.309737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.687 [2024-07-16 01:32:42.309750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.687 qpair failed and we were unable to recover it. 00:27:16.687 [2024-07-16 01:32:42.309927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.687 [2024-07-16 01:32:42.309940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.687 qpair failed and we were unable to recover it. 00:27:16.687 [2024-07-16 01:32:42.310086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.687 [2024-07-16 01:32:42.310099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.687 qpair failed and we were unable to recover it. 
00:27:16.687 [2024-07-16 01:32:42.310192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.687 [2024-07-16 01:32:42.310225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.687 qpair failed and we were unable to recover it. 00:27:16.687 [2024-07-16 01:32:42.310414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.687 [2024-07-16 01:32:42.310444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.687 qpair failed and we were unable to recover it. 00:27:16.687 [2024-07-16 01:32:42.310542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.687 [2024-07-16 01:32:42.310561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.687 qpair failed and we were unable to recover it. 00:27:16.687 [2024-07-16 01:32:42.310723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.687 [2024-07-16 01:32:42.310739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.687 qpair failed and we were unable to recover it. 00:27:16.687 [2024-07-16 01:32:42.310830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.687 [2024-07-16 01:32:42.310860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.687 qpair failed and we were unable to recover it. 00:27:16.687 [2024-07-16 01:32:42.311106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.687 [2024-07-16 01:32:42.311123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.687 qpair failed and we were unable to recover it. 00:27:16.687 [2024-07-16 01:32:42.311300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.687 [2024-07-16 01:32:42.311316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.687 qpair failed and we were unable to recover it. 00:27:16.687 [2024-07-16 01:32:42.311486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.687 [2024-07-16 01:32:42.311502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.687 qpair failed and we were unable to recover it. 00:27:16.687 [2024-07-16 01:32:42.311603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.687 [2024-07-16 01:32:42.311620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.687 qpair failed and we were unable to recover it. 00:27:16.687 [2024-07-16 01:32:42.311708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.687 [2024-07-16 01:32:42.311727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.687 qpair failed and we were unable to recover it. 
00:27:16.687 [2024-07-16 01:32:42.311921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.687 [2024-07-16 01:32:42.311939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.687 qpair failed and we were unable to recover it. 00:27:16.687 [2024-07-16 01:32:42.312038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.687 [2024-07-16 01:32:42.312054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.687 qpair failed and we were unable to recover it. 00:27:16.687 [2024-07-16 01:32:42.312211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.687 [2024-07-16 01:32:42.312227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.687 qpair failed and we were unable to recover it. 00:27:16.687 [2024-07-16 01:32:42.312380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.687 [2024-07-16 01:32:42.312401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.687 qpair failed and we were unable to recover it. 00:27:16.687 [2024-07-16 01:32:42.312560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.687 [2024-07-16 01:32:42.312576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.687 qpair failed and we were unable to recover it. 00:27:16.687 [2024-07-16 01:32:42.312676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.687 [2024-07-16 01:32:42.312692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.687 qpair failed and we were unable to recover it. 00:27:16.687 [2024-07-16 01:32:42.312829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.687 [2024-07-16 01:32:42.312845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.687 qpair failed and we were unable to recover it. 00:27:16.687 [2024-07-16 01:32:42.313014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.687 [2024-07-16 01:32:42.313031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.687 qpair failed and we were unable to recover it. 00:27:16.687 [2024-07-16 01:32:42.313123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.687 [2024-07-16 01:32:42.313139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.687 qpair failed and we were unable to recover it. 00:27:16.687 [2024-07-16 01:32:42.313242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.687 [2024-07-16 01:32:42.313254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.687 qpair failed and we were unable to recover it. 
00:27:16.687 [2024-07-16 01:32:42.313357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.687 [2024-07-16 01:32:42.313371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.687 qpair failed and we were unable to recover it. 00:27:16.687 [2024-07-16 01:32:42.313510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.687 [2024-07-16 01:32:42.313523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.687 qpair failed and we were unable to recover it. 00:27:16.687 [2024-07-16 01:32:42.313675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.687 [2024-07-16 01:32:42.313687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.687 qpair failed and we were unable to recover it. 00:27:16.687 [2024-07-16 01:32:42.313757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.687 [2024-07-16 01:32:42.313770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.687 qpair failed and we were unable to recover it. 00:27:16.687 [2024-07-16 01:32:42.313829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.687 [2024-07-16 01:32:42.313840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.687 qpair failed and we were unable to recover it. 00:27:16.687 [2024-07-16 01:32:42.313932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.687 [2024-07-16 01:32:42.313944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.687 qpair failed and we were unable to recover it. 00:27:16.687 [2024-07-16 01:32:42.314020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.687 [2024-07-16 01:32:42.314033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.687 qpair failed and we were unable to recover it. 00:27:16.687 [2024-07-16 01:32:42.314180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.687 [2024-07-16 01:32:42.314193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.687 qpair failed and we were unable to recover it. 00:27:16.687 [2024-07-16 01:32:42.314346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.687 [2024-07-16 01:32:42.314359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.687 qpair failed and we were unable to recover it. 00:27:16.687 [2024-07-16 01:32:42.314437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.687 [2024-07-16 01:32:42.314449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.687 qpair failed and we were unable to recover it. 
00:27:16.687 [2024-07-16 01:32:42.314536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.687 [2024-07-16 01:32:42.314549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.687 qpair failed and we were unable to recover it. 00:27:16.687 [2024-07-16 01:32:42.314620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.688 [2024-07-16 01:32:42.314632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.688 qpair failed and we were unable to recover it. 00:27:16.688 [2024-07-16 01:32:42.314770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.688 [2024-07-16 01:32:42.314783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.688 qpair failed and we were unable to recover it. 00:27:16.688 [2024-07-16 01:32:42.314916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.688 [2024-07-16 01:32:42.314928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.688 qpair failed and we were unable to recover it. 00:27:16.688 [2024-07-16 01:32:42.315085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.688 [2024-07-16 01:32:42.315097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.688 qpair failed and we were unable to recover it. 00:27:16.688 [2024-07-16 01:32:42.315231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.688 [2024-07-16 01:32:42.315241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.688 qpair failed and we were unable to recover it. 00:27:16.688 [2024-07-16 01:32:42.315379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.688 [2024-07-16 01:32:42.315390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.688 qpair failed and we were unable to recover it. 00:27:16.688 [2024-07-16 01:32:42.315475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.688 [2024-07-16 01:32:42.315486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.688 qpair failed and we were unable to recover it. 00:27:16.688 [2024-07-16 01:32:42.315624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.688 [2024-07-16 01:32:42.315635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.688 qpair failed and we were unable to recover it. 00:27:16.688 [2024-07-16 01:32:42.315731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.688 [2024-07-16 01:32:42.315741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.688 qpair failed and we were unable to recover it. 
00:27:16.688 [2024-07-16 01:32:42.315842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.688 [2024-07-16 01:32:42.315871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.688 qpair failed and we were unable to recover it. 00:27:16.688 [2024-07-16 01:32:42.316029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.688 [2024-07-16 01:32:42.316048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.688 qpair failed and we were unable to recover it. 00:27:16.688 [2024-07-16 01:32:42.316147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.688 [2024-07-16 01:32:42.316164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.688 qpair failed and we were unable to recover it. 00:27:16.688 [2024-07-16 01:32:42.316387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.688 [2024-07-16 01:32:42.316401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.688 qpair failed and we were unable to recover it. 00:27:16.688 [2024-07-16 01:32:42.316494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.688 [2024-07-16 01:32:42.316504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.688 qpair failed and we were unable to recover it. 00:27:16.688 [2024-07-16 01:32:42.316607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.688 [2024-07-16 01:32:42.316618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.688 qpair failed and we were unable to recover it. 00:27:16.688 [2024-07-16 01:32:42.316702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.688 [2024-07-16 01:32:42.316712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.688 qpair failed and we were unable to recover it. 00:27:16.688 [2024-07-16 01:32:42.316787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.688 [2024-07-16 01:32:42.316797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.688 qpair failed and we were unable to recover it. 00:27:16.688 [2024-07-16 01:32:42.317009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.688 [2024-07-16 01:32:42.317020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.688 qpair failed and we were unable to recover it. 00:27:16.688 [2024-07-16 01:32:42.317097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.688 [2024-07-16 01:32:42.317109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.688 qpair failed and we were unable to recover it. 
00:27:16.688 [2024-07-16 01:32:42.317179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.688 [2024-07-16 01:32:42.317190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.688 qpair failed and we were unable to recover it.
00:27:16.689 [2024-07-16 01:32:42.320641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.689 [2024-07-16 01:32:42.320654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.689 qpair failed and we were unable to recover it.
00:27:16.689 [2024-07-16 01:32:42.320747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.689 [2024-07-16 01:32:42.320762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.689 qpair failed and we were unable to recover it.
00:27:16.689 [2024-07-16 01:32:42.321483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.689 [2024-07-16 01:32:42.321503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420
00:27:16.689 qpair failed and we were unable to recover it.
00:27:16.689 [2024-07-16 01:32:42.322246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.689 [2024-07-16 01:32:42.322262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420
00:27:16.689 qpair failed and we were unable to recover it.
00:27:16.689 [2024-07-16 01:32:42.322442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.689 [2024-07-16 01:32:42.322456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.689 qpair failed and we were unable to recover it.
00:27:16.692 [2024-07-16 01:32:42.335118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.692 [2024-07-16 01:32:42.335131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.692 qpair failed and we were unable to recover it.
00:27:16.692 [2024-07-16 01:32:42.335268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.692 [2024-07-16 01:32:42.335281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.692 qpair failed and we were unable to recover it.
00:27:16.692 [2024-07-16 01:32:42.336407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.692 [2024-07-16 01:32:42.336427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420
00:27:16.692 qpair failed and we were unable to recover it.
00:27:16.692 [2024-07-16 01:32:42.336663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.692 [2024-07-16 01:32:42.336682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420
00:27:16.692 qpair failed and we were unable to recover it.
00:27:16.692 [2024-07-16 01:32:42.337391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.692 [2024-07-16 01:32:42.337404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.692 qpair failed and we were unable to recover it.
00:27:16.692 [2024-07-16 01:32:42.339259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.692 [2024-07-16 01:32:42.339272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.692 qpair failed and we were unable to recover it.
00:27:16.692 [2024-07-16 01:32:42.339364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.692 [2024-07-16 01:32:42.339376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.692 qpair failed and we were unable to recover it. 00:27:16.692 [2024-07-16 01:32:42.339579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.692 [2024-07-16 01:32:42.339592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.692 qpair failed and we were unable to recover it. 00:27:16.692 [2024-07-16 01:32:42.339667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.692 [2024-07-16 01:32:42.339680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.692 qpair failed and we were unable to recover it. 00:27:16.693 [2024-07-16 01:32:42.339760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.693 [2024-07-16 01:32:42.339773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.693 qpair failed and we were unable to recover it. 00:27:16.693 [2024-07-16 01:32:42.339865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.693 [2024-07-16 01:32:42.339878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.693 qpair failed and we were unable to recover it. 00:27:16.693 [2024-07-16 01:32:42.339954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.693 [2024-07-16 01:32:42.339967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.693 qpair failed and we were unable to recover it. 00:27:16.693 [2024-07-16 01:32:42.340049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.693 [2024-07-16 01:32:42.340061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.693 qpair failed and we were unable to recover it. 00:27:16.693 [2024-07-16 01:32:42.340156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.693 [2024-07-16 01:32:42.340187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.693 qpair failed and we were unable to recover it. 00:27:16.693 [2024-07-16 01:32:42.340284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.693 [2024-07-16 01:32:42.340305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.693 qpair failed and we were unable to recover it. 00:27:16.693 [2024-07-16 01:32:42.340482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.693 [2024-07-16 01:32:42.340499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.693 qpair failed and we were unable to recover it. 
00:27:16.693 [2024-07-16 01:32:42.340651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.693 [2024-07-16 01:32:42.340667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.693 qpair failed and we were unable to recover it. 00:27:16.693 [2024-07-16 01:32:42.340752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.693 [2024-07-16 01:32:42.340768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.693 qpair failed and we were unable to recover it. 00:27:16.693 [2024-07-16 01:32:42.340863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.693 [2024-07-16 01:32:42.340879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.693 qpair failed and we were unable to recover it. 00:27:16.693 [2024-07-16 01:32:42.340971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.693 [2024-07-16 01:32:42.340987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.693 qpair failed and we were unable to recover it. 00:27:16.693 [2024-07-16 01:32:42.341073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.693 [2024-07-16 01:32:42.341089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.693 qpair failed and we were unable to recover it. 00:27:16.693 [2024-07-16 01:32:42.341245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.693 [2024-07-16 01:32:42.341262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.693 qpair failed and we were unable to recover it. 00:27:16.693 [2024-07-16 01:32:42.341344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.693 [2024-07-16 01:32:42.341358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.693 qpair failed and we were unable to recover it. 00:27:16.693 [2024-07-16 01:32:42.341440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.693 [2024-07-16 01:32:42.341452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.693 qpair failed and we were unable to recover it. 00:27:16.693 [2024-07-16 01:32:42.341549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.693 [2024-07-16 01:32:42.341562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.693 qpair failed and we were unable to recover it. 00:27:16.693 [2024-07-16 01:32:42.341628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.693 [2024-07-16 01:32:42.341641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.693 qpair failed and we were unable to recover it. 
00:27:16.693 [2024-07-16 01:32:42.341712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.693 [2024-07-16 01:32:42.341725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.693 qpair failed and we were unable to recover it. 00:27:16.693 [2024-07-16 01:32:42.341804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.693 [2024-07-16 01:32:42.341817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.693 qpair failed and we were unable to recover it. 00:27:16.693 [2024-07-16 01:32:42.341892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.693 [2024-07-16 01:32:42.341905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.693 qpair failed and we were unable to recover it. 00:27:16.693 [2024-07-16 01:32:42.341995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.693 [2024-07-16 01:32:42.342008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.693 qpair failed and we were unable to recover it. 00:27:16.693 [2024-07-16 01:32:42.342095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.693 [2024-07-16 01:32:42.342107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.693 qpair failed and we were unable to recover it. 00:27:16.693 [2024-07-16 01:32:42.342252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.693 [2024-07-16 01:32:42.342265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.693 qpair failed and we were unable to recover it. 00:27:16.693 [2024-07-16 01:32:42.342349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.693 [2024-07-16 01:32:42.342364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.693 qpair failed and we were unable to recover it. 00:27:16.693 [2024-07-16 01:32:42.342433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.693 [2024-07-16 01:32:42.342445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.693 qpair failed and we were unable to recover it. 00:27:16.693 [2024-07-16 01:32:42.342583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.693 [2024-07-16 01:32:42.342596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.693 qpair failed and we were unable to recover it. 00:27:16.693 [2024-07-16 01:32:42.342755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.693 [2024-07-16 01:32:42.342767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.693 qpair failed and we were unable to recover it. 
00:27:16.693 [2024-07-16 01:32:42.342848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.693 [2024-07-16 01:32:42.342861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.693 qpair failed and we were unable to recover it. 00:27:16.693 [2024-07-16 01:32:42.342997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.693 [2024-07-16 01:32:42.343009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.693 qpair failed and we were unable to recover it. 00:27:16.693 [2024-07-16 01:32:42.343089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.693 [2024-07-16 01:32:42.343102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.693 qpair failed and we were unable to recover it. 00:27:16.693 [2024-07-16 01:32:42.343242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.693 [2024-07-16 01:32:42.343255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.693 qpair failed and we were unable to recover it. 00:27:16.693 [2024-07-16 01:32:42.343326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.693 [2024-07-16 01:32:42.343359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.693 qpair failed and we were unable to recover it. 00:27:16.693 [2024-07-16 01:32:42.343426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.693 [2024-07-16 01:32:42.343439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.693 qpair failed and we were unable to recover it. 00:27:16.693 [2024-07-16 01:32:42.343534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.693 [2024-07-16 01:32:42.343547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.693 qpair failed and we were unable to recover it. 00:27:16.693 [2024-07-16 01:32:42.343642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.693 [2024-07-16 01:32:42.343655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.693 qpair failed and we were unable to recover it. 00:27:16.693 [2024-07-16 01:32:42.343716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.693 [2024-07-16 01:32:42.343728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.693 qpair failed and we were unable to recover it. 00:27:16.693 [2024-07-16 01:32:42.343905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.693 [2024-07-16 01:32:42.343918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.693 qpair failed and we were unable to recover it. 
00:27:16.693 [2024-07-16 01:32:42.344054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.693 [2024-07-16 01:32:42.344067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.693 qpair failed and we were unable to recover it. 00:27:16.693 [2024-07-16 01:32:42.344197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.693 [2024-07-16 01:32:42.344210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.693 qpair failed and we were unable to recover it. 00:27:16.693 [2024-07-16 01:32:42.344433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.693 [2024-07-16 01:32:42.344446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.693 qpair failed and we were unable to recover it. 00:27:16.693 [2024-07-16 01:32:42.344600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.693 [2024-07-16 01:32:42.344612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.693 qpair failed and we were unable to recover it. 00:27:16.694 [2024-07-16 01:32:42.344682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.694 [2024-07-16 01:32:42.344695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.694 qpair failed and we were unable to recover it. 00:27:16.694 [2024-07-16 01:32:42.344765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.694 [2024-07-16 01:32:42.344777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.694 qpair failed and we were unable to recover it. 00:27:16.694 [2024-07-16 01:32:42.344925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.694 [2024-07-16 01:32:42.344937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.694 qpair failed and we were unable to recover it. 00:27:16.694 [2024-07-16 01:32:42.345086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.694 [2024-07-16 01:32:42.345101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.694 qpair failed and we were unable to recover it. 00:27:16.694 [2024-07-16 01:32:42.345234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.694 [2024-07-16 01:32:42.345246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.694 qpair failed and we were unable to recover it. 00:27:16.694 [2024-07-16 01:32:42.345312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.694 [2024-07-16 01:32:42.345324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.694 qpair failed and we were unable to recover it. 
00:27:16.694 [2024-07-16 01:32:42.345487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.694 [2024-07-16 01:32:42.345524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.694 qpair failed and we were unable to recover it. 00:27:16.694 [2024-07-16 01:32:42.345614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.694 [2024-07-16 01:32:42.345633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.694 qpair failed and we were unable to recover it. 00:27:16.694 [2024-07-16 01:32:42.345723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.694 [2024-07-16 01:32:42.345742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.694 qpair failed and we were unable to recover it. 00:27:16.694 [2024-07-16 01:32:42.345850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.694 [2024-07-16 01:32:42.345865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.694 qpair failed and we were unable to recover it. 00:27:16.694 [2024-07-16 01:32:42.345941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.694 [2024-07-16 01:32:42.345954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.694 qpair failed and we were unable to recover it. 00:27:16.694 [2024-07-16 01:32:42.346022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.694 [2024-07-16 01:32:42.346034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.694 qpair failed and we were unable to recover it. 00:27:16.694 [2024-07-16 01:32:42.346174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.694 [2024-07-16 01:32:42.346187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.694 qpair failed and we were unable to recover it. 00:27:16.694 [2024-07-16 01:32:42.346364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.694 [2024-07-16 01:32:42.346379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.694 qpair failed and we were unable to recover it. 00:27:16.694 [2024-07-16 01:32:42.346561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.694 [2024-07-16 01:32:42.346575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.694 qpair failed and we were unable to recover it. 00:27:16.694 [2024-07-16 01:32:42.346651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.694 [2024-07-16 01:32:42.346664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.694 qpair failed and we were unable to recover it. 
00:27:16.694 [2024-07-16 01:32:42.346726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.694 [2024-07-16 01:32:42.346739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.694 qpair failed and we were unable to recover it. 00:27:16.694 [2024-07-16 01:32:42.346815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.694 [2024-07-16 01:32:42.346828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.694 qpair failed and we were unable to recover it. 00:27:16.694 [2024-07-16 01:32:42.346907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.694 [2024-07-16 01:32:42.346919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.694 qpair failed and we were unable to recover it. 00:27:16.694 [2024-07-16 01:32:42.347061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.694 [2024-07-16 01:32:42.347073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.694 qpair failed and we were unable to recover it. 00:27:16.694 [2024-07-16 01:32:42.347141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.694 [2024-07-16 01:32:42.347154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.694 qpair failed and we were unable to recover it. 00:27:16.694 [2024-07-16 01:32:42.347269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.694 [2024-07-16 01:32:42.347281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.694 qpair failed and we were unable to recover it. 00:27:16.694 [2024-07-16 01:32:42.347419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.694 [2024-07-16 01:32:42.347432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.694 qpair failed and we were unable to recover it. 00:27:16.694 [2024-07-16 01:32:42.347576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.694 [2024-07-16 01:32:42.347589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.694 qpair failed and we were unable to recover it. 00:27:16.694 [2024-07-16 01:32:42.347725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.694 [2024-07-16 01:32:42.347738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.694 qpair failed and we were unable to recover it. 00:27:16.694 [2024-07-16 01:32:42.347830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.694 [2024-07-16 01:32:42.347843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.694 qpair failed and we were unable to recover it. 
00:27:16.694 [2024-07-16 01:32:42.347942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.694 [2024-07-16 01:32:42.347954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.694 qpair failed and we were unable to recover it. 00:27:16.694 [2024-07-16 01:32:42.348024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.694 [2024-07-16 01:32:42.348037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.694 qpair failed and we were unable to recover it. 00:27:16.694 [2024-07-16 01:32:42.348199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.694 [2024-07-16 01:32:42.348212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.694 qpair failed and we were unable to recover it. 00:27:16.694 [2024-07-16 01:32:42.348310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.694 [2024-07-16 01:32:42.348323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.694 qpair failed and we were unable to recover it. 00:27:16.694 [2024-07-16 01:32:42.348474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.694 [2024-07-16 01:32:42.348487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.694 qpair failed and we were unable to recover it. 00:27:16.694 [2024-07-16 01:32:42.348621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.694 [2024-07-16 01:32:42.348634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.694 qpair failed and we were unable to recover it. 00:27:16.694 [2024-07-16 01:32:42.348717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.694 [2024-07-16 01:32:42.348729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.694 qpair failed and we were unable to recover it. 00:27:16.694 [2024-07-16 01:32:42.348815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.694 [2024-07-16 01:32:42.348828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.694 qpair failed and we were unable to recover it. 00:27:16.694 [2024-07-16 01:32:42.349006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.694 [2024-07-16 01:32:42.349018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.694 qpair failed and we were unable to recover it. 00:27:16.694 [2024-07-16 01:32:42.349200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.694 [2024-07-16 01:32:42.349213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.694 qpair failed and we were unable to recover it. 
00:27:16.694 [2024-07-16 01:32:42.349408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.694 [2024-07-16 01:32:42.349421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.694 qpair failed and we were unable to recover it. 00:27:16.694 [2024-07-16 01:32:42.349495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.694 [2024-07-16 01:32:42.349507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.694 qpair failed and we were unable to recover it. 00:27:16.694 [2024-07-16 01:32:42.349642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.694 [2024-07-16 01:32:42.349655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.694 qpair failed and we were unable to recover it. 00:27:16.694 [2024-07-16 01:32:42.349740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.694 [2024-07-16 01:32:42.349752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.694 qpair failed and we were unable to recover it. 00:27:16.694 [2024-07-16 01:32:42.350002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.695 [2024-07-16 01:32:42.350014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.695 qpair failed and we were unable to recover it. 00:27:16.695 [2024-07-16 01:32:42.350198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.695 [2024-07-16 01:32:42.350211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.695 qpair failed and we were unable to recover it. 00:27:16.695 [2024-07-16 01:32:42.350288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.695 [2024-07-16 01:32:42.350300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.695 qpair failed and we were unable to recover it. 00:27:16.695 [2024-07-16 01:32:42.350439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.695 [2024-07-16 01:32:42.350455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.695 qpair failed and we were unable to recover it. 00:27:16.695 [2024-07-16 01:32:42.350609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.695 [2024-07-16 01:32:42.350622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.695 qpair failed and we were unable to recover it. 00:27:16.695 [2024-07-16 01:32:42.350702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.695 [2024-07-16 01:32:42.350714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.695 qpair failed and we were unable to recover it. 
00:27:16.695 [2024-07-16 01:32:42.350811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.695 [2024-07-16 01:32:42.350824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.695 qpair failed and we were unable to recover it. 00:27:16.695 [2024-07-16 01:32:42.350964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.695 [2024-07-16 01:32:42.350976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.695 qpair failed and we were unable to recover it. 00:27:16.695 [2024-07-16 01:32:42.351122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.695 [2024-07-16 01:32:42.351134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.695 qpair failed and we were unable to recover it. 00:27:16.695 [2024-07-16 01:32:42.351258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.695 [2024-07-16 01:32:42.351270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.695 qpair failed and we were unable to recover it. 00:27:16.695 [2024-07-16 01:32:42.351479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.695 [2024-07-16 01:32:42.351492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.695 qpair failed and we were unable to recover it. 00:27:16.695 [2024-07-16 01:32:42.351569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.695 [2024-07-16 01:32:42.351582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.695 qpair failed and we were unable to recover it. 00:27:16.695 [2024-07-16 01:32:42.351788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.695 [2024-07-16 01:32:42.351800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.695 qpair failed and we were unable to recover it. 00:27:16.695 [2024-07-16 01:32:42.351965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.695 [2024-07-16 01:32:42.351977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.695 qpair failed and we were unable to recover it. 00:27:16.695 [2024-07-16 01:32:42.352166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.695 [2024-07-16 01:32:42.352178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.695 qpair failed and we were unable to recover it. 00:27:16.695 [2024-07-16 01:32:42.352332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.695 [2024-07-16 01:32:42.352349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.695 qpair failed and we were unable to recover it. 
00:27:16.695 [2024-07-16 01:32:42.352503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.695 [2024-07-16 01:32:42.352516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.695 qpair failed and we were unable to recover it. 00:27:16.695 [2024-07-16 01:32:42.352671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.695 [2024-07-16 01:32:42.352684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.695 qpair failed and we were unable to recover it. 00:27:16.695 [2024-07-16 01:32:42.352849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.695 [2024-07-16 01:32:42.352862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.695 qpair failed and we were unable to recover it. 00:27:16.695 [2024-07-16 01:32:42.352941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.695 [2024-07-16 01:32:42.352954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.695 qpair failed and we were unable to recover it. 00:27:16.695 [2024-07-16 01:32:42.353055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.695 [2024-07-16 01:32:42.353067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.695 qpair failed and we were unable to recover it. 00:27:16.695 [2024-07-16 01:32:42.353212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.695 [2024-07-16 01:32:42.353224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.695 qpair failed and we were unable to recover it. 00:27:16.695 [2024-07-16 01:32:42.353442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.695 [2024-07-16 01:32:42.353455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.695 qpair failed and we were unable to recover it. 00:27:16.695 [2024-07-16 01:32:42.353544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.695 [2024-07-16 01:32:42.353557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.695 qpair failed and we were unable to recover it. 00:27:16.695 [2024-07-16 01:32:42.353761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.695 [2024-07-16 01:32:42.353773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.695 qpair failed and we were unable to recover it. 00:27:16.695 [2024-07-16 01:32:42.353843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.695 [2024-07-16 01:32:42.353856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.695 qpair failed and we were unable to recover it. 
00:27:16.695 [2024-07-16 01:32:42.353931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.695 [2024-07-16 01:32:42.353944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.695 qpair failed and we were unable to recover it. 00:27:16.695 [2024-07-16 01:32:42.354080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.695 [2024-07-16 01:32:42.354093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.695 qpair failed and we were unable to recover it. 00:27:16.695 [2024-07-16 01:32:42.354251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.695 [2024-07-16 01:32:42.354263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.695 qpair failed and we were unable to recover it. 00:27:16.695 [2024-07-16 01:32:42.354408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.695 [2024-07-16 01:32:42.354421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.695 qpair failed and we were unable to recover it. 00:27:16.695 [2024-07-16 01:32:42.354492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.695 [2024-07-16 01:32:42.354504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.695 qpair failed and we were unable to recover it. 00:27:16.695 [2024-07-16 01:32:42.354661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.695 [2024-07-16 01:32:42.354674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.695 qpair failed and we were unable to recover it. 00:27:16.695 [2024-07-16 01:32:42.354820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.695 [2024-07-16 01:32:42.354833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.695 qpair failed and we were unable to recover it. 00:27:16.695 [2024-07-16 01:32:42.354971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.695 [2024-07-16 01:32:42.354984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.695 qpair failed and we were unable to recover it. 00:27:16.695 [2024-07-16 01:32:42.355076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.695 [2024-07-16 01:32:42.355088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.695 qpair failed and we were unable to recover it. 00:27:16.695 [2024-07-16 01:32:42.355242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.695 [2024-07-16 01:32:42.355255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.695 qpair failed and we were unable to recover it. 
00:27:16.695 [2024-07-16 01:32:42.355348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.695 [2024-07-16 01:32:42.355362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.695 qpair failed and we were unable to recover it. 00:27:16.695 [2024-07-16 01:32:42.355451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.695 [2024-07-16 01:32:42.355464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.695 qpair failed and we were unable to recover it. 00:27:16.695 [2024-07-16 01:32:42.355645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.695 [2024-07-16 01:32:42.355658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.695 qpair failed and we were unable to recover it. 00:27:16.695 [2024-07-16 01:32:42.355756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.695 [2024-07-16 01:32:42.355768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.695 qpair failed and we were unable to recover it. 00:27:16.695 [2024-07-16 01:32:42.355860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.695 [2024-07-16 01:32:42.355873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.695 qpair failed and we were unable to recover it. 00:27:16.695 [2024-07-16 01:32:42.356052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.696 [2024-07-16 01:32:42.356065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.696 qpair failed and we were unable to recover it. 00:27:16.696 [2024-07-16 01:32:42.356133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.696 [2024-07-16 01:32:42.356146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.696 qpair failed and we were unable to recover it. 00:27:16.696 [2024-07-16 01:32:42.356324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.696 [2024-07-16 01:32:42.356343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.696 qpair failed and we were unable to recover it. 00:27:16.696 [2024-07-16 01:32:42.356423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.696 [2024-07-16 01:32:42.356437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.696 qpair failed and we were unable to recover it. 00:27:16.696 [2024-07-16 01:32:42.356584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.696 [2024-07-16 01:32:42.356596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.696 qpair failed and we were unable to recover it. 
00:27:16.696 [2024-07-16 01:32:42.356746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.696 [2024-07-16 01:32:42.356758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.696 qpair failed and we were unable to recover it. 00:27:16.696 [2024-07-16 01:32:42.356837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.696 [2024-07-16 01:32:42.356850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.696 qpair failed and we were unable to recover it. 00:27:16.696 [2024-07-16 01:32:42.357078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.696 [2024-07-16 01:32:42.357091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.696 qpair failed and we were unable to recover it. 00:27:16.696 [2024-07-16 01:32:42.357248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.696 [2024-07-16 01:32:42.357260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.696 qpair failed and we were unable to recover it. 00:27:16.696 [2024-07-16 01:32:42.357461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.696 [2024-07-16 01:32:42.357474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.696 qpair failed and we were unable to recover it. 00:27:16.696 [2024-07-16 01:32:42.357561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.696 [2024-07-16 01:32:42.357573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.696 qpair failed and we were unable to recover it. 00:27:16.696 [2024-07-16 01:32:42.357648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.696 [2024-07-16 01:32:42.357661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.696 qpair failed and we were unable to recover it. 00:27:16.696 [2024-07-16 01:32:42.357759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.696 [2024-07-16 01:32:42.357773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.696 qpair failed and we were unable to recover it. 00:27:16.696 [2024-07-16 01:32:42.357908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.696 [2024-07-16 01:32:42.357921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.696 qpair failed and we were unable to recover it. 00:27:16.696 [2024-07-16 01:32:42.358062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.696 [2024-07-16 01:32:42.358093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.696 qpair failed and we were unable to recover it. 
00:27:16.696 [2024-07-16 01:32:42.358329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.696 [2024-07-16 01:32:42.358368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.696 qpair failed and we were unable to recover it. 00:27:16.696 [2024-07-16 01:32:42.358591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.696 [2024-07-16 01:32:42.358622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.696 qpair failed and we were unable to recover it. 00:27:16.696 [2024-07-16 01:32:42.358805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.696 [2024-07-16 01:32:42.358817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.696 qpair failed and we were unable to recover it. 00:27:16.696 [2024-07-16 01:32:42.359004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.696 [2024-07-16 01:32:42.359034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.696 qpair failed and we were unable to recover it. 00:27:16.696 [2024-07-16 01:32:42.359208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.696 [2024-07-16 01:32:42.359240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.696 qpair failed and we were unable to recover it. 00:27:16.696 [2024-07-16 01:32:42.359380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.696 [2024-07-16 01:32:42.359412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.696 qpair failed and we were unable to recover it. 00:27:16.696 [2024-07-16 01:32:42.359586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.696 [2024-07-16 01:32:42.359617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.696 qpair failed and we were unable to recover it. 00:27:16.696 [2024-07-16 01:32:42.359862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.696 [2024-07-16 01:32:42.359893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.696 qpair failed and we were unable to recover it. 00:27:16.696 [2024-07-16 01:32:42.360141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.696 [2024-07-16 01:32:42.360171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.696 qpair failed and we were unable to recover it. 00:27:16.696 [2024-07-16 01:32:42.360361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.696 [2024-07-16 01:32:42.360401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.696 qpair failed and we were unable to recover it. 
00:27:16.696 [2024-07-16 01:32:42.360594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.696 [2024-07-16 01:32:42.360624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.696 qpair failed and we were unable to recover it. 00:27:16.696 [2024-07-16 01:32:42.360762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.696 [2024-07-16 01:32:42.360774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.696 qpair failed and we were unable to recover it. 00:27:16.696 [2024-07-16 01:32:42.360855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.696 [2024-07-16 01:32:42.360867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.696 qpair failed and we were unable to recover it. 00:27:16.696 [2024-07-16 01:32:42.360960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.696 [2024-07-16 01:32:42.360990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.696 qpair failed and we were unable to recover it. 00:27:16.696 [2024-07-16 01:32:42.361139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.696 [2024-07-16 01:32:42.361171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.696 qpair failed and we were unable to recover it. 00:27:16.696 [2024-07-16 01:32:42.361361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.696 [2024-07-16 01:32:42.361393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.696 qpair failed and we were unable to recover it. 00:27:16.696 [2024-07-16 01:32:42.361598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.696 [2024-07-16 01:32:42.361629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.696 qpair failed and we were unable to recover it. 00:27:16.696 [2024-07-16 01:32:42.361854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.696 [2024-07-16 01:32:42.361885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.696 qpair failed and we were unable to recover it. 00:27:16.696 [2024-07-16 01:32:42.362120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.696 [2024-07-16 01:32:42.362132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.696 qpair failed and we were unable to recover it. 00:27:16.696 [2024-07-16 01:32:42.362306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.696 [2024-07-16 01:32:42.362346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.696 qpair failed and we were unable to recover it. 
00:27:16.696 [2024-07-16 01:32:42.362527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.696 [2024-07-16 01:32:42.362558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.696 qpair failed and we were unable to recover it. 00:27:16.696 [2024-07-16 01:32:42.362677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.696 [2024-07-16 01:32:42.362707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.696 qpair failed and we were unable to recover it. 00:27:16.696 [2024-07-16 01:32:42.362873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.696 [2024-07-16 01:32:42.362886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.696 qpair failed and we were unable to recover it. 00:27:16.696 [2024-07-16 01:32:42.363031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.696 [2024-07-16 01:32:42.363062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.696 qpair failed and we were unable to recover it. 00:27:16.696 [2024-07-16 01:32:42.363177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.696 [2024-07-16 01:32:42.363207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.696 qpair failed and we were unable to recover it. 00:27:16.696 [2024-07-16 01:32:42.363403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.696 [2024-07-16 01:32:42.363434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.696 qpair failed and we were unable to recover it. 00:27:16.696 [2024-07-16 01:32:42.363616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.697 [2024-07-16 01:32:42.363647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.697 qpair failed and we were unable to recover it. 00:27:16.697 [2024-07-16 01:32:42.363781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.697 [2024-07-16 01:32:42.363817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.697 qpair failed and we were unable to recover it. 00:27:16.697 [2024-07-16 01:32:42.364020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.697 [2024-07-16 01:32:42.364050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.697 qpair failed and we were unable to recover it. 00:27:16.697 [2024-07-16 01:32:42.364294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.697 [2024-07-16 01:32:42.364324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.697 qpair failed and we were unable to recover it. 
00:27:16.697 [2024-07-16 01:32:42.364593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.697 [2024-07-16 01:32:42.364631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.697 qpair failed and we were unable to recover it. 00:27:16.697 [2024-07-16 01:32:42.364737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.697 [2024-07-16 01:32:42.364758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.697 qpair failed and we were unable to recover it. 00:27:16.697 [2024-07-16 01:32:42.364847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.697 [2024-07-16 01:32:42.364863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.697 qpair failed and we were unable to recover it. 00:27:16.697 [2024-07-16 01:32:42.365018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.697 [2024-07-16 01:32:42.365034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.697 qpair failed and we were unable to recover it. 00:27:16.697 [2024-07-16 01:32:42.365254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.697 [2024-07-16 01:32:42.365284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.697 qpair failed and we were unable to recover it. 00:27:16.697 [2024-07-16 01:32:42.365538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.697 [2024-07-16 01:32:42.365570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.697 qpair failed and we were unable to recover it. 00:27:16.697 [2024-07-16 01:32:42.365745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.697 [2024-07-16 01:32:42.365775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.697 qpair failed and we were unable to recover it. 00:27:16.697 [2024-07-16 01:32:42.366025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.697 [2024-07-16 01:32:42.366041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.697 qpair failed and we were unable to recover it. 00:27:16.697 [2024-07-16 01:32:42.366284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.697 [2024-07-16 01:32:42.366313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.697 qpair failed and we were unable to recover it. 00:27:16.697 [2024-07-16 01:32:42.366583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.697 [2024-07-16 01:32:42.366615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.697 qpair failed and we were unable to recover it. 
00:27:16.697 [2024-07-16 01:32:42.366889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.697 [2024-07-16 01:32:42.366919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.697 qpair failed and we were unable to recover it. 00:27:16.697 [2024-07-16 01:32:42.367117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.697 [2024-07-16 01:32:42.367148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.697 qpair failed and we were unable to recover it. 00:27:16.697 [2024-07-16 01:32:42.367269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.697 [2024-07-16 01:32:42.367299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.697 qpair failed and we were unable to recover it. 00:27:16.697 [2024-07-16 01:32:42.367442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.697 [2024-07-16 01:32:42.367475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.697 qpair failed and we were unable to recover it. 00:27:16.697 [2024-07-16 01:32:42.367670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.697 [2024-07-16 01:32:42.367701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.697 qpair failed and we were unable to recover it. 00:27:16.697 [2024-07-16 01:32:42.367815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.697 [2024-07-16 01:32:42.367845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.697 qpair failed and we were unable to recover it. 00:27:16.697 [2024-07-16 01:32:42.368121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.697 [2024-07-16 01:32:42.368151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.697 qpair failed and we were unable to recover it. 00:27:16.697 [2024-07-16 01:32:42.368276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.697 [2024-07-16 01:32:42.368306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.697 qpair failed and we were unable to recover it. 00:27:16.697 [2024-07-16 01:32:42.368506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.697 [2024-07-16 01:32:42.368544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.697 qpair failed and we were unable to recover it. 00:27:16.697 [2024-07-16 01:32:42.368726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.697 [2024-07-16 01:32:42.368757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.697 qpair failed and we were unable to recover it. 
00:27:16.697 [2024-07-16 01:32:42.369005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.697 [2024-07-16 01:32:42.369037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.697 qpair failed and we were unable to recover it. 00:27:16.697 [2024-07-16 01:32:42.369252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.697 [2024-07-16 01:32:42.369263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.697 qpair failed and we were unable to recover it. 00:27:16.697 [2024-07-16 01:32:42.369348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.697 [2024-07-16 01:32:42.369360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.697 qpair failed and we were unable to recover it. 00:27:16.697 [2024-07-16 01:32:42.369459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.697 [2024-07-16 01:32:42.369472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.697 qpair failed and we were unable to recover it. 00:27:16.697 [2024-07-16 01:32:42.369732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.697 [2024-07-16 01:32:42.369769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.697 qpair failed and we were unable to recover it. 00:27:16.697 [2024-07-16 01:32:42.369980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.697 [2024-07-16 01:32:42.370010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.697 qpair failed and we were unable to recover it. 00:27:16.697 [2024-07-16 01:32:42.370141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.697 [2024-07-16 01:32:42.370169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.697 qpair failed and we were unable to recover it. 00:27:16.697 [2024-07-16 01:32:42.370248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.697 [2024-07-16 01:32:42.370258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.697 qpair failed and we were unable to recover it. 00:27:16.697 [2024-07-16 01:32:42.370330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.697 [2024-07-16 01:32:42.370352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.697 qpair failed and we were unable to recover it. 00:27:16.697 [2024-07-16 01:32:42.370582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.697 [2024-07-16 01:32:42.370593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.697 qpair failed and we were unable to recover it. 
00:27:16.697 [2024-07-16 01:32:42.370681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.697 [2024-07-16 01:32:42.370691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.697 qpair failed and we were unable to recover it. 00:27:16.697 [2024-07-16 01:32:42.370859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.697 [2024-07-16 01:32:42.370871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.697 qpair failed and we were unable to recover it. 00:27:16.698 [2024-07-16 01:32:42.370955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-07-16 01:32:42.370966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 00:27:16.698 [2024-07-16 01:32:42.371060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-07-16 01:32:42.371071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 00:27:16.698 [2024-07-16 01:32:42.371220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-07-16 01:32:42.371251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 00:27:16.698 [2024-07-16 01:32:42.371400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-07-16 01:32:42.371431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 00:27:16.698 [2024-07-16 01:32:42.371562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-07-16 01:32:42.371592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 00:27:16.698 [2024-07-16 01:32:42.371728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-07-16 01:32:42.371740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 00:27:16.698 [2024-07-16 01:32:42.371893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-07-16 01:32:42.371905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 00:27:16.698 [2024-07-16 01:32:42.371996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-07-16 01:32:42.372006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 
00:27:16.698 [2024-07-16 01:32:42.372237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-07-16 01:32:42.372268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 00:27:16.698 [2024-07-16 01:32:42.372398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-07-16 01:32:42.372429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 00:27:16.698 [2024-07-16 01:32:42.372534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-07-16 01:32:42.372564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 00:27:16.698 [2024-07-16 01:32:42.373559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-07-16 01:32:42.373585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 00:27:16.698 [2024-07-16 01:32:42.373777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-07-16 01:32:42.373790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 00:27:16.698 [2024-07-16 01:32:42.373874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-07-16 01:32:42.373884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 00:27:16.698 [2024-07-16 01:32:42.373958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-07-16 01:32:42.373968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 00:27:16.698 [2024-07-16 01:32:42.374136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-07-16 01:32:42.374167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 00:27:16.698 [2024-07-16 01:32:42.374382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-07-16 01:32:42.374414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 00:27:16.698 [2024-07-16 01:32:42.374630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-07-16 01:32:42.374662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 
00:27:16.698 [2024-07-16 01:32:42.374912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-07-16 01:32:42.374945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 00:27:16.698 [2024-07-16 01:32:42.375130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-07-16 01:32:42.375161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 00:27:16.698 [2024-07-16 01:32:42.375294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-07-16 01:32:42.375326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 00:27:16.698 [2024-07-16 01:32:42.375468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-07-16 01:32:42.375499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 00:27:16.698 [2024-07-16 01:32:42.375696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-07-16 01:32:42.375727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 00:27:16.698 [2024-07-16 01:32:42.375852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-07-16 01:32:42.375883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 00:27:16.698 [2024-07-16 01:32:42.375992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-07-16 01:32:42.376023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 00:27:16.698 [2024-07-16 01:32:42.376209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-07-16 01:32:42.376240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 00:27:16.698 [2024-07-16 01:32:42.376360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-07-16 01:32:42.376392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 00:27:16.698 [2024-07-16 01:32:42.376578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-07-16 01:32:42.376610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 
00:27:16.698 [2024-07-16 01:32:42.376730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-07-16 01:32:42.376761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 00:27:16.698 [2024-07-16 01:32:42.376977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-07-16 01:32:42.377008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 00:27:16.698 [2024-07-16 01:32:42.377123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-07-16 01:32:42.377135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 00:27:16.698 [2024-07-16 01:32:42.377212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-07-16 01:32:42.377223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 00:27:16.698 [2024-07-16 01:32:42.377328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-07-16 01:32:42.377387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 00:27:16.698 [2024-07-16 01:32:42.377587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-07-16 01:32:42.377618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 00:27:16.698 [2024-07-16 01:32:42.377824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-07-16 01:32:42.377855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 00:27:16.698 [2024-07-16 01:32:42.377964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-07-16 01:32:42.377975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 00:27:16.698 [2024-07-16 01:32:42.378107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-07-16 01:32:42.378119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 00:27:16.698 [2024-07-16 01:32:42.378203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-07-16 01:32:42.378214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 
00:27:16.698 [2024-07-16 01:32:42.378290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-07-16 01:32:42.378300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 00:27:16.698 [2024-07-16 01:32:42.378363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-07-16 01:32:42.378374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 00:27:16.698 [2024-07-16 01:32:42.378538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-07-16 01:32:42.378549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 00:27:16.699 [2024-07-16 01:32:42.378689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-07-16 01:32:42.378700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-07-16 01:32:42.378840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-07-16 01:32:42.378851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-07-16 01:32:42.378918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-07-16 01:32:42.378928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-07-16 01:32:42.379007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-07-16 01:32:42.379017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-07-16 01:32:42.379093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-07-16 01:32:42.379103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-07-16 01:32:42.379843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-07-16 01:32:42.379865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-07-16 01:32:42.380048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-07-16 01:32:42.380081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 
00:27:16.699 [2024-07-16 01:32:42.380356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-07-16 01:32:42.380388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-07-16 01:32:42.380630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-07-16 01:32:42.380661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-07-16 01:32:42.380806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-07-16 01:32:42.380838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-07-16 01:32:42.381012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-07-16 01:32:42.381023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-07-16 01:32:42.381240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-07-16 01:32:42.381271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-07-16 01:32:42.381404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-07-16 01:32:42.381441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-07-16 01:32:42.381633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-07-16 01:32:42.381665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-07-16 01:32:42.381925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-07-16 01:32:42.381937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-07-16 01:32:42.382084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-07-16 01:32:42.382096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-07-16 01:32:42.383117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-07-16 01:32:42.383136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 
00:27:16.699 [2024-07-16 01:32:42.383393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-07-16 01:32:42.383405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-07-16 01:32:42.383560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-07-16 01:32:42.383571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-07-16 01:32:42.383738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-07-16 01:32:42.383750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-07-16 01:32:42.383840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-07-16 01:32:42.383851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-07-16 01:32:42.383930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-07-16 01:32:42.383940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-07-16 01:32:42.384019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-07-16 01:32:42.384029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-07-16 01:32:42.384162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-07-16 01:32:42.384195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-07-16 01:32:42.384388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-07-16 01:32:42.384420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-07-16 01:32:42.384599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-07-16 01:32:42.384631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-07-16 01:32:42.384903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-07-16 01:32:42.384934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 
00:27:16.699 [2024-07-16 01:32:42.385123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-07-16 01:32:42.385154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-07-16 01:32:42.385424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-07-16 01:32:42.385460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-07-16 01:32:42.385591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-07-16 01:32:42.385621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-07-16 01:32:42.385826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-07-16 01:32:42.385856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-07-16 01:32:42.386002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-07-16 01:32:42.386039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-07-16 01:32:42.386234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-07-16 01:32:42.386265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-07-16 01:32:42.386406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-07-16 01:32:42.386439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-07-16 01:32:42.386632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-07-16 01:32:42.386644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-07-16 01:32:42.386780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-07-16 01:32:42.386792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-07-16 01:32:42.386870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-07-16 01:32:42.386880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 
00:27:16.699 [2024-07-16 01:32:42.386970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-07-16 01:32:42.386980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-07-16 01:32:42.387042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-07-16 01:32:42.387052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-07-16 01:32:42.387154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-07-16 01:32:42.387184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-07-16 01:32:42.387386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-07-16 01:32:42.387419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.700 [2024-07-16 01:32:42.387556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-07-16 01:32:42.387587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-07-16 01:32:42.387703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-07-16 01:32:42.387734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-07-16 01:32:42.387861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-07-16 01:32:42.387872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-07-16 01:32:42.387949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-07-16 01:32:42.387960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-07-16 01:32:42.388096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-07-16 01:32:42.388107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-07-16 01:32:42.388219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-07-16 01:32:42.388230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 
00:27:16.700 [2024-07-16 01:32:42.388310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-07-16 01:32:42.388320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-07-16 01:32:42.388553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-07-16 01:32:42.388585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-07-16 01:32:42.388772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-07-16 01:32:42.388803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-07-16 01:32:42.388977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-07-16 01:32:42.389008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-07-16 01:32:42.389123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-07-16 01:32:42.389154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-07-16 01:32:42.389280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-07-16 01:32:42.389311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-07-16 01:32:42.389521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-07-16 01:32:42.389557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-07-16 01:32:42.389739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-07-16 01:32:42.389770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-07-16 01:32:42.389958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-07-16 01:32:42.389989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-07-16 01:32:42.390090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-07-16 01:32:42.390120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 
00:27:16.700 [2024-07-16 01:32:42.390296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-07-16 01:32:42.390326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-07-16 01:32:42.390517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-07-16 01:32:42.390585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-07-16 01:32:42.390887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-07-16 01:32:42.390923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-07-16 01:32:42.391041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-07-16 01:32:42.391073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-07-16 01:32:42.391258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-07-16 01:32:42.391289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-07-16 01:32:42.391442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-07-16 01:32:42.391477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-07-16 01:32:42.391606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-07-16 01:32:42.391637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-07-16 01:32:42.391835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-07-16 01:32:42.391866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-07-16 01:32:42.391987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-07-16 01:32:42.392019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-07-16 01:32:42.392153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-07-16 01:32:42.392184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 
00:27:16.700 [2024-07-16 01:32:42.392380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-07-16 01:32:42.392413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-07-16 01:32:42.392600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-07-16 01:32:42.392631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-07-16 01:32:42.392764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-07-16 01:32:42.392795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-07-16 01:32:42.392984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-07-16 01:32:42.393014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-07-16 01:32:42.393259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-07-16 01:32:42.393295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-07-16 01:32:42.393444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-07-16 01:32:42.393476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-07-16 01:32:42.393680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-07-16 01:32:42.393711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-07-16 01:32:42.393887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-07-16 01:32:42.393918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-07-16 01:32:42.394023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-07-16 01:32:42.394034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-07-16 01:32:42.394108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-07-16 01:32:42.394120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 
00:27:16.700 [2024-07-16 01:32:42.394206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-07-16 01:32:42.394216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-07-16 01:32:42.394351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-07-16 01:32:42.394384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-07-16 01:32:42.394569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-07-16 01:32:42.394601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-07-16 01:32:42.394807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-07-16 01:32:42.394837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-07-16 01:32:42.395029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-07-16 01:32:42.395060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.701 [2024-07-16 01:32:42.395166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-07-16 01:32:42.395197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-07-16 01:32:42.395317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-07-16 01:32:42.395362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-07-16 01:32:42.395614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-07-16 01:32:42.395645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-07-16 01:32:42.395770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-07-16 01:32:42.395801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-07-16 01:32:42.395977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-07-16 01:32:42.396008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 
00:27:16.701 [2024-07-16 01:32:42.396118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-07-16 01:32:42.396151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-07-16 01:32:42.396352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-07-16 01:32:42.396392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-07-16 01:32:42.396579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-07-16 01:32:42.396611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-07-16 01:32:42.396786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-07-16 01:32:42.396798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-07-16 01:32:42.396887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-07-16 01:32:42.396896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-07-16 01:32:42.397099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-07-16 01:32:42.397130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-07-16 01:32:42.397320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-07-16 01:32:42.397428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-07-16 01:32:42.397674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-07-16 01:32:42.397705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-07-16 01:32:42.397824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-07-16 01:32:42.397855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-07-16 01:32:42.397970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-07-16 01:32:42.398001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 
00:27:16.701 [2024-07-16 01:32:42.398181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-07-16 01:32:42.398211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-07-16 01:32:42.398433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-07-16 01:32:42.398477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-07-16 01:32:42.398625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-07-16 01:32:42.398657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-07-16 01:32:42.398836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-07-16 01:32:42.398866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-07-16 01:32:42.399045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-07-16 01:32:42.399076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-07-16 01:32:42.399180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-07-16 01:32:42.399196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-07-16 01:32:42.399284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-07-16 01:32:42.399299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-07-16 01:32:42.399531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-07-16 01:32:42.399544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-07-16 01:32:42.399639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-07-16 01:32:42.399649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-07-16 01:32:42.399732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-07-16 01:32:42.399742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 
00:27:16.701 [2024-07-16 01:32:42.399891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-07-16 01:32:42.399903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-07-16 01:32:42.399968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-07-16 01:32:42.399978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-07-16 01:32:42.400063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-07-16 01:32:42.400073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-07-16 01:32:42.400159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-07-16 01:32:42.400169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-07-16 01:32:42.400302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-07-16 01:32:42.400314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-07-16 01:32:42.400378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-07-16 01:32:42.400390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-07-16 01:32:42.400485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-07-16 01:32:42.400496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-07-16 01:32:42.400642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-07-16 01:32:42.400653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-07-16 01:32:42.400852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-07-16 01:32:42.400863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-07-16 01:32:42.400922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-07-16 01:32:42.400931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 
00:27:16.701 [2024-07-16 01:32:42.401076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.702 [2024-07-16 01:32:42.401087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.702 qpair failed and we were unable to recover it. 00:27:16.702 [2024-07-16 01:32:42.401177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.702 [2024-07-16 01:32:42.401186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.702 qpair failed and we were unable to recover it. 00:27:16.702 [2024-07-16 01:32:42.401308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.702 [2024-07-16 01:32:42.401347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.702 qpair failed and we were unable to recover it. 00:27:16.702 [2024-07-16 01:32:42.401464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.702 [2024-07-16 01:32:42.401494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.702 qpair failed and we were unable to recover it. 00:27:16.702 [2024-07-16 01:32:42.401670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.702 [2024-07-16 01:32:42.401700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.702 qpair failed and we were unable to recover it. 00:27:16.702 [2024-07-16 01:32:42.401887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.702 [2024-07-16 01:32:42.401922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.702 qpair failed and we were unable to recover it. 00:27:16.702 [2024-07-16 01:32:42.402004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.702 [2024-07-16 01:32:42.402014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.702 qpair failed and we were unable to recover it. 00:27:16.702 [2024-07-16 01:32:42.402174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.702 [2024-07-16 01:32:42.402185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.702 qpair failed and we were unable to recover it. 00:27:16.702 [2024-07-16 01:32:42.402340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.702 [2024-07-16 01:32:42.402352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.702 qpair failed and we were unable to recover it. 00:27:16.702 [2024-07-16 01:32:42.402423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.702 [2024-07-16 01:32:42.402432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.702 qpair failed and we were unable to recover it. 
00:27:16.702 [2024-07-16 01:32:42.402521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.702 [2024-07-16 01:32:42.402530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.702 qpair failed and we were unable to recover it. 00:27:16.702 [2024-07-16 01:32:42.402672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.702 [2024-07-16 01:32:42.402683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.702 qpair failed and we were unable to recover it. 00:27:16.702 [2024-07-16 01:32:42.402951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.702 [2024-07-16 01:32:42.402962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.702 qpair failed and we were unable to recover it. 00:27:16.702 [2024-07-16 01:32:42.403047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.702 [2024-07-16 01:32:42.403058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.702 qpair failed and we were unable to recover it. 00:27:16.702 [2024-07-16 01:32:42.403138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.702 [2024-07-16 01:32:42.403148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.702 qpair failed and we were unable to recover it. 00:27:16.702 [2024-07-16 01:32:42.403211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.702 [2024-07-16 01:32:42.403235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.702 qpair failed and we were unable to recover it. 00:27:16.702 [2024-07-16 01:32:42.403421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.702 [2024-07-16 01:32:42.403453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.702 qpair failed and we were unable to recover it. 00:27:16.702 [2024-07-16 01:32:42.403573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.702 [2024-07-16 01:32:42.403603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.702 qpair failed and we were unable to recover it. 00:27:16.702 [2024-07-16 01:32:42.403734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.702 [2024-07-16 01:32:42.403764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.702 qpair failed and we were unable to recover it. 00:27:16.702 [2024-07-16 01:32:42.403890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.702 [2024-07-16 01:32:42.403901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.702 qpair failed and we were unable to recover it. 
00:27:16.702 [2024-07-16 01:32:42.404063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.702 [2024-07-16 01:32:42.404095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.702 qpair failed and we were unable to recover it. 00:27:16.702 [2024-07-16 01:32:42.404321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.702 [2024-07-16 01:32:42.404386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.702 qpair failed and we were unable to recover it. 00:27:16.702 [2024-07-16 01:32:42.404530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.702 [2024-07-16 01:32:42.404563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.702 qpair failed and we were unable to recover it. 00:27:16.702 [2024-07-16 01:32:42.404690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.702 [2024-07-16 01:32:42.404706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.702 qpair failed and we were unable to recover it. 00:27:16.702 [2024-07-16 01:32:42.404874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.702 [2024-07-16 01:32:42.404905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.702 qpair failed and we were unable to recover it. 00:27:16.702 [2024-07-16 01:32:42.405080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.702 [2024-07-16 01:32:42.405111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.702 qpair failed and we were unable to recover it. 00:27:16.702 [2024-07-16 01:32:42.405313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.702 [2024-07-16 01:32:42.405355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.702 qpair failed and we were unable to recover it. 00:27:16.702 [2024-07-16 01:32:42.405562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.702 [2024-07-16 01:32:42.405593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.702 qpair failed and we were unable to recover it. 00:27:16.702 [2024-07-16 01:32:42.405773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.702 [2024-07-16 01:32:42.405804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.702 qpair failed and we were unable to recover it. 00:27:16.702 [2024-07-16 01:32:42.405978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.702 [2024-07-16 01:32:42.406008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.702 qpair failed and we were unable to recover it. 
00:27:16.702 [2024-07-16 01:32:42.406186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.702 [2024-07-16 01:32:42.406202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.702 qpair failed and we were unable to recover it. 00:27:16.702 [2024-07-16 01:32:42.406398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.702 [2024-07-16 01:32:42.406416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.702 qpair failed and we were unable to recover it. 00:27:16.702 [2024-07-16 01:32:42.406573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.702 [2024-07-16 01:32:42.406604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.702 qpair failed and we were unable to recover it. 00:27:16.702 [2024-07-16 01:32:42.406782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.702 [2024-07-16 01:32:42.406813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.702 qpair failed and we were unable to recover it. 00:27:16.702 [2024-07-16 01:32:42.407037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.702 [2024-07-16 01:32:42.407075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.702 qpair failed and we were unable to recover it. 00:27:16.702 [2024-07-16 01:32:42.407272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.702 [2024-07-16 01:32:42.407287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.702 qpair failed and we were unable to recover it. 00:27:16.702 [2024-07-16 01:32:42.407447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.702 [2024-07-16 01:32:42.407463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.702 qpair failed and we were unable to recover it. 00:27:16.702 [2024-07-16 01:32:42.407570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.702 [2024-07-16 01:32:42.407615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.702 qpair failed and we were unable to recover it. 00:27:16.702 [2024-07-16 01:32:42.407743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.702 [2024-07-16 01:32:42.407773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.702 qpair failed and we were unable to recover it. 00:27:16.702 [2024-07-16 01:32:42.407910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.702 [2024-07-16 01:32:42.407941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.702 qpair failed and we were unable to recover it. 
00:27:16.702 [2024-07-16 01:32:42.408082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.702 [2024-07-16 01:32:42.408112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.703 qpair failed and we were unable to recover it. 00:27:16.703 [2024-07-16 01:32:42.408342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.703 [2024-07-16 01:32:42.408358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.703 qpair failed and we were unable to recover it. 00:27:16.703 [2024-07-16 01:32:42.408461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.703 [2024-07-16 01:32:42.408476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.703 qpair failed and we were unable to recover it. 00:27:16.703 [2024-07-16 01:32:42.408664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.703 [2024-07-16 01:32:42.408694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.703 qpair failed and we were unable to recover it. 00:27:16.703 [2024-07-16 01:32:42.409723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.703 [2024-07-16 01:32:42.409750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.703 qpair failed and we were unable to recover it. 00:27:16.703 [2024-07-16 01:32:42.410010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.703 [2024-07-16 01:32:42.410048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.703 qpair failed and we were unable to recover it. 00:27:16.703 [2024-07-16 01:32:42.410238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.703 [2024-07-16 01:32:42.410270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.703 qpair failed and we were unable to recover it. 00:27:16.703 [2024-07-16 01:32:42.410520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.703 [2024-07-16 01:32:42.410552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.703 qpair failed and we were unable to recover it. 00:27:16.703 [2024-07-16 01:32:42.410803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.703 [2024-07-16 01:32:42.410834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.703 qpair failed and we were unable to recover it. 00:27:16.703 [2024-07-16 01:32:42.410936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.703 [2024-07-16 01:32:42.410947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.703 qpair failed and we were unable to recover it. 
00:27:16.703 [2024-07-16 01:32:42.411084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.703 [2024-07-16 01:32:42.411096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.703 qpair failed and we were unable to recover it. 00:27:16.703 [2024-07-16 01:32:42.411192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.703 [2024-07-16 01:32:42.411221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.703 qpair failed and we were unable to recover it. 00:27:16.703 [2024-07-16 01:32:42.411351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.703 [2024-07-16 01:32:42.411383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.703 qpair failed and we were unable to recover it. 00:27:16.703 [2024-07-16 01:32:42.411507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.703 [2024-07-16 01:32:42.411538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.703 qpair failed and we were unable to recover it. 00:27:16.703 [2024-07-16 01:32:42.411677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.703 [2024-07-16 01:32:42.411708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.703 qpair failed and we were unable to recover it. 00:27:16.703 [2024-07-16 01:32:42.411827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.703 [2024-07-16 01:32:42.411858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.703 qpair failed and we were unable to recover it. 00:27:16.703 [2024-07-16 01:32:42.412042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.703 [2024-07-16 01:32:42.412080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.703 qpair failed and we were unable to recover it. 00:27:16.703 [2024-07-16 01:32:42.412236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.703 [2024-07-16 01:32:42.412247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.703 qpair failed and we were unable to recover it. 00:27:16.703 [2024-07-16 01:32:42.412320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.703 [2024-07-16 01:32:42.412330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.703 qpair failed and we were unable to recover it. 00:27:16.703 [2024-07-16 01:32:42.412477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.703 [2024-07-16 01:32:42.412510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.703 qpair failed and we were unable to recover it. 
00:27:16.703 [2024-07-16 01:32:42.412636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.703 [2024-07-16 01:32:42.412666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.703 qpair failed and we were unable to recover it. 00:27:16.703 [2024-07-16 01:32:42.412822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.703 [2024-07-16 01:32:42.412858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.703 qpair failed and we were unable to recover it. 00:27:16.703 [2024-07-16 01:32:42.412986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.703 [2024-07-16 01:32:42.413018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.703 qpair failed and we were unable to recover it. 00:27:16.703 [2024-07-16 01:32:42.413134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.703 [2024-07-16 01:32:42.413165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.703 qpair failed and we were unable to recover it. 00:27:16.703 [2024-07-16 01:32:42.413322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.703 [2024-07-16 01:32:42.413334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.703 qpair failed and we were unable to recover it. 00:27:16.703 [2024-07-16 01:32:42.413417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.703 [2024-07-16 01:32:42.413427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.703 qpair failed and we were unable to recover it. 00:27:16.703 [2024-07-16 01:32:42.413512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.703 [2024-07-16 01:32:42.413522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.703 qpair failed and we were unable to recover it. 00:27:16.703 [2024-07-16 01:32:42.413680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.703 [2024-07-16 01:32:42.413690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.703 qpair failed and we were unable to recover it. 00:27:16.703 [2024-07-16 01:32:42.413835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.703 [2024-07-16 01:32:42.413846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.703 qpair failed and we were unable to recover it. 00:27:16.703 [2024-07-16 01:32:42.414624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.703 [2024-07-16 01:32:42.414647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.703 qpair failed and we were unable to recover it. 
00:27:16.703 [2024-07-16 01:32:42.414736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.703 [2024-07-16 01:32:42.414747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.703 qpair failed and we were unable to recover it. 00:27:16.703 [2024-07-16 01:32:42.414826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.703 [2024-07-16 01:32:42.414837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.703 qpair failed and we were unable to recover it. 00:27:16.703 [2024-07-16 01:32:42.415045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.703 [2024-07-16 01:32:42.415076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.703 qpair failed and we were unable to recover it. 00:27:16.703 [2024-07-16 01:32:42.415250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.703 [2024-07-16 01:32:42.415282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.703 qpair failed and we were unable to recover it. 00:27:16.703 [2024-07-16 01:32:42.415477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.703 [2024-07-16 01:32:42.415511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.703 qpair failed and we were unable to recover it. 00:27:16.703 [2024-07-16 01:32:42.415635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.703 [2024-07-16 01:32:42.415667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.703 qpair failed and we were unable to recover it. 00:27:16.703 [2024-07-16 01:32:42.415856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.703 [2024-07-16 01:32:42.415887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.703 qpair failed and we were unable to recover it. 00:27:16.703 [2024-07-16 01:32:42.416087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.703 [2024-07-16 01:32:42.416118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.703 qpair failed and we were unable to recover it. 00:27:16.703 [2024-07-16 01:32:42.416296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.703 [2024-07-16 01:32:42.416327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.703 qpair failed and we were unable to recover it. 00:27:16.703 [2024-07-16 01:32:42.416476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.703 [2024-07-16 01:32:42.416510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.703 qpair failed and we were unable to recover it. 
00:27:16.703 [2024-07-16 01:32:42.416650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.703 [2024-07-16 01:32:42.416680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.703 qpair failed and we were unable to recover it. 00:27:16.703 [2024-07-16 01:32:42.416792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.703 [2024-07-16 01:32:42.416823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.704 qpair failed and we were unable to recover it. 00:27:16.704 [2024-07-16 01:32:42.416941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.704 [2024-07-16 01:32:42.416972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.704 qpair failed and we were unable to recover it. 00:27:16.704 [2024-07-16 01:32:42.417085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.704 [2024-07-16 01:32:42.417115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.704 qpair failed and we were unable to recover it. 00:27:16.704 [2024-07-16 01:32:42.417243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.704 [2024-07-16 01:32:42.417273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.704 qpair failed and we were unable to recover it. 00:27:16.704 [2024-07-16 01:32:42.417403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.704 [2024-07-16 01:32:42.417437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.704 qpair failed and we were unable to recover it. 00:27:16.704 [2024-07-16 01:32:42.417560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.704 [2024-07-16 01:32:42.417592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.704 qpair failed and we were unable to recover it. 00:27:16.704 [2024-07-16 01:32:42.417704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.704 [2024-07-16 01:32:42.417745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.704 qpair failed and we were unable to recover it. 00:27:16.704 [2024-07-16 01:32:42.417922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.704 [2024-07-16 01:32:42.417938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.704 qpair failed and we were unable to recover it. 00:27:16.704 [2024-07-16 01:32:42.418037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.704 [2024-07-16 01:32:42.418067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.704 qpair failed and we were unable to recover it. 
00:27:16.704 [2024-07-16 01:32:42.418242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.704 [2024-07-16 01:32:42.418272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.704 qpair failed and we were unable to recover it. 00:27:16.704 [2024-07-16 01:32:42.418446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.704 [2024-07-16 01:32:42.418477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.704 qpair failed and we were unable to recover it. 00:27:16.704 [2024-07-16 01:32:42.419526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.704 [2024-07-16 01:32:42.419554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.704 qpair failed and we were unable to recover it. 00:27:16.704 [2024-07-16 01:32:42.419787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.704 [2024-07-16 01:32:42.419804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.704 qpair failed and we were unable to recover it. 00:27:16.704 [2024-07-16 01:32:42.419982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.704 [2024-07-16 01:32:42.419998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.704 qpair failed and we were unable to recover it. 00:27:16.704 [2024-07-16 01:32:42.420104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.704 [2024-07-16 01:32:42.420134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.704 qpair failed and we were unable to recover it. 00:27:16.704 [2024-07-16 01:32:42.420352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.704 [2024-07-16 01:32:42.420383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.704 qpair failed and we were unable to recover it. 00:27:16.704 [2024-07-16 01:32:42.420510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.704 [2024-07-16 01:32:42.420541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.704 qpair failed and we were unable to recover it. 00:27:16.704 [2024-07-16 01:32:42.420685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.704 [2024-07-16 01:32:42.420717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.704 qpair failed and we were unable to recover it. 00:27:16.704 [2024-07-16 01:32:42.420832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.704 [2024-07-16 01:32:42.420863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.704 qpair failed and we were unable to recover it. 
00:27:16.704 [2024-07-16 01:32:42.421087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.704 [2024-07-16 01:32:42.421118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.704 qpair failed and we were unable to recover it. 00:27:16.704 [2024-07-16 01:32:42.421234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.704 [2024-07-16 01:32:42.421272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.704 qpair failed and we were unable to recover it. 00:27:16.704 [2024-07-16 01:32:42.421453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.704 [2024-07-16 01:32:42.421485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.704 qpair failed and we were unable to recover it. 00:27:16.704 [2024-07-16 01:32:42.421611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.704 [2024-07-16 01:32:42.421642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.704 qpair failed and we were unable to recover it. 00:27:16.704 [2024-07-16 01:32:42.421763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.704 [2024-07-16 01:32:42.421793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.704 qpair failed and we were unable to recover it. 00:27:16.704 [2024-07-16 01:32:42.421923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.704 [2024-07-16 01:32:42.421952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.704 qpair failed and we were unable to recover it. 00:27:16.704 [2024-07-16 01:32:42.422141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.704 [2024-07-16 01:32:42.422172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.704 qpair failed and we were unable to recover it. 00:27:16.704 [2024-07-16 01:32:42.422357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.704 [2024-07-16 01:32:42.422387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.704 qpair failed and we were unable to recover it. 00:27:16.704 [2024-07-16 01:32:42.422508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.704 [2024-07-16 01:32:42.422538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.704 qpair failed and we were unable to recover it. 00:27:16.704 [2024-07-16 01:32:42.422723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.704 [2024-07-16 01:32:42.422754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.704 qpair failed and we were unable to recover it. 
00:27:16.704 [2024-07-16 01:32:42.422925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.704 [2024-07-16 01:32:42.422942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.704 qpair failed and we were unable to recover it. 00:27:16.704 [2024-07-16 01:32:42.423103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.704 [2024-07-16 01:32:42.423134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.704 qpair failed and we were unable to recover it. 00:27:16.704 [2024-07-16 01:32:42.423251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.704 [2024-07-16 01:32:42.423281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.704 qpair failed and we were unable to recover it. 00:27:16.704 [2024-07-16 01:32:42.423473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.704 [2024-07-16 01:32:42.423505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.704 qpair failed and we were unable to recover it. 00:27:16.704 [2024-07-16 01:32:42.423748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.704 [2024-07-16 01:32:42.423779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.704 qpair failed and we were unable to recover it. 00:27:16.704 [2024-07-16 01:32:42.424052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.704 [2024-07-16 01:32:42.424083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.704 qpair failed and we were unable to recover it. 00:27:16.704 [2024-07-16 01:32:42.424275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.704 [2024-07-16 01:32:42.424305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.704 qpair failed and we were unable to recover it. 00:27:16.704 [2024-07-16 01:32:42.424547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.704 [2024-07-16 01:32:42.424578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.704 qpair failed and we were unable to recover it. 00:27:16.704 [2024-07-16 01:32:42.424713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.704 [2024-07-16 01:32:42.424744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.704 qpair failed and we were unable to recover it. 00:27:16.704 [2024-07-16 01:32:42.424919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.704 [2024-07-16 01:32:42.424949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.704 qpair failed and we were unable to recover it. 
00:27:16.704 [2024-07-16 01:32:42.425070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.704 [2024-07-16 01:32:42.425100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.704 qpair failed and we were unable to recover it. 00:27:16.704 [2024-07-16 01:32:42.425302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.704 [2024-07-16 01:32:42.425333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.704 qpair failed and we were unable to recover it. 00:27:16.704 [2024-07-16 01:32:42.425477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.704 [2024-07-16 01:32:42.425508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 00:27:16.705 [2024-07-16 01:32:42.425631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-07-16 01:32:42.425663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 00:27:16.705 [2024-07-16 01:32:42.425840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-07-16 01:32:42.425871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 00:27:16.705 [2024-07-16 01:32:42.425985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-07-16 01:32:42.426016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 00:27:16.705 [2024-07-16 01:32:42.426295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-07-16 01:32:42.426325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 00:27:16.705 [2024-07-16 01:32:42.426457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-07-16 01:32:42.426488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 00:27:16.705 [2024-07-16 01:32:42.426628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-07-16 01:32:42.426659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 00:27:16.705 [2024-07-16 01:32:42.426774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-07-16 01:32:42.426804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 
00:27:16.705 [2024-07-16 01:32:42.426979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-07-16 01:32:42.427010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 00:27:16.705 [2024-07-16 01:32:42.427138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-07-16 01:32:42.427168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 00:27:16.705 [2024-07-16 01:32:42.427296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-07-16 01:32:42.427326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 00:27:16.705 [2024-07-16 01:32:42.427530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-07-16 01:32:42.427561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 00:27:16.705 [2024-07-16 01:32:42.427752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-07-16 01:32:42.427782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 00:27:16.705 [2024-07-16 01:32:42.427918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-07-16 01:32:42.427948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 00:27:16.705 [2024-07-16 01:32:42.428059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-07-16 01:32:42.428090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 00:27:16.705 [2024-07-16 01:32:42.428197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-07-16 01:32:42.428226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 00:27:16.705 [2024-07-16 01:32:42.428346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-07-16 01:32:42.428378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 00:27:16.705 [2024-07-16 01:32:42.428574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-07-16 01:32:42.428605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 
00:27:16.705 [2024-07-16 01:32:42.428725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-07-16 01:32:42.428755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 00:27:16.705 [2024-07-16 01:32:42.428927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-07-16 01:32:42.428963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 00:27:16.705 [2024-07-16 01:32:42.429077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-07-16 01:32:42.429100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 00:27:16.705 [2024-07-16 01:32:42.429246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-07-16 01:32:42.429262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 00:27:16.705 [2024-07-16 01:32:42.429482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-07-16 01:32:42.429514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 00:27:16.705 [2024-07-16 01:32:42.429726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-07-16 01:32:42.429756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 00:27:16.705 [2024-07-16 01:32:42.429893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-07-16 01:32:42.429909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 00:27:16.705 [2024-07-16 01:32:42.430007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-07-16 01:32:42.430022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 00:27:16.705 [2024-07-16 01:32:42.430178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-07-16 01:32:42.430194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 00:27:16.705 [2024-07-16 01:32:42.430335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-07-16 01:32:42.430356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 
00:27:16.705 [2024-07-16 01:32:42.430454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.705 [2024-07-16 01:32:42.430469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420
00:27:16.705 qpair failed and we were unable to recover it.
00:27:16.705 [2024-07-16 01:32:42.430544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.705 [2024-07-16 01:32:42.430559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420
00:27:16.705 qpair failed and we were unable to recover it.
00:27:16.705 [2024-07-16 01:32:42.430662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.705 [2024-07-16 01:32:42.430676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420
00:27:16.705 qpair failed and we were unable to recover it.
00:27:16.705 [2024-07-16 01:32:42.430766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.705 [2024-07-16 01:32:42.430781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420
00:27:16.705 qpair failed and we were unable to recover it.
00:27:16.705 [2024-07-16 01:32:42.430853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.705 [2024-07-16 01:32:42.430867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.705 qpair failed and we were unable to recover it.
00:27:16.705 [2024-07-16 01:32:42.431002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.705 [2024-07-16 01:32:42.431013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.705 qpair failed and we were unable to recover it.
00:27:16.705 [2024-07-16 01:32:42.431110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.705 [2024-07-16 01:32:42.431141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.705 qpair failed and we were unable to recover it.
00:27:16.705 [2024-07-16 01:32:42.431317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.705 [2024-07-16 01:32:42.431356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.705 qpair failed and we were unable to recover it.
00:27:16.705 [2024-07-16 01:32:42.431570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.706 [2024-07-16 01:32:42.431601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.706 qpair failed and we were unable to recover it.
00:27:16.706 [2024-07-16 01:32:42.431779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.706 [2024-07-16 01:32:42.431811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.706 qpair failed and we were unable to recover it.
00:27:16.706 [2024-07-16 01:32:42.431986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.706 [2024-07-16 01:32:42.431997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.706 qpair failed and we were unable to recover it.
00:27:16.706 [2024-07-16 01:32:42.432060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.706 [2024-07-16 01:32:42.432070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.706 qpair failed and we were unable to recover it.
00:27:16.706 [2024-07-16 01:32:42.432228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.706 [2024-07-16 01:32:42.432259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.706 qpair failed and we were unable to recover it.
00:27:16.706 [2024-07-16 01:32:42.432389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.706 [2024-07-16 01:32:42.432424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.706 qpair failed and we were unable to recover it.
00:27:16.706 [2024-07-16 01:32:42.432538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.706 [2024-07-16 01:32:42.432569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.706 qpair failed and we were unable to recover it.
00:27:16.706 [2024-07-16 01:32:42.432768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.706 [2024-07-16 01:32:42.432799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.706 qpair failed and we were unable to recover it.
00:27:16.706 [2024-07-16 01:32:42.433002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.706 [2024-07-16 01:32:42.433014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.706 qpair failed and we were unable to recover it.
00:27:16.706 [2024-07-16 01:32:42.433218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.706 [2024-07-16 01:32:42.433249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.706 qpair failed and we were unable to recover it.
00:27:16.706 [2024-07-16 01:32:42.433434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.706 [2024-07-16 01:32:42.433467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.706 qpair failed and we were unable to recover it.
00:27:16.706 [2024-07-16 01:32:42.433606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.706 [2024-07-16 01:32:42.433637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.706 qpair failed and we were unable to recover it.
00:27:16.706 [2024-07-16 01:32:42.433746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.706 [2024-07-16 01:32:42.433777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.706 qpair failed and we were unable to recover it.
00:27:16.706 [2024-07-16 01:32:42.433892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.706 [2024-07-16 01:32:42.433903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.706 qpair failed and we were unable to recover it.
00:27:16.706 [2024-07-16 01:32:42.434054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.706 [2024-07-16 01:32:42.434065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.706 qpair failed and we were unable to recover it.
00:27:16.706 [2024-07-16 01:32:42.434137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.706 [2024-07-16 01:32:42.434147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.706 qpair failed and we were unable to recover it.
00:27:16.706 [2024-07-16 01:32:42.434218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.706 [2024-07-16 01:32:42.434229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.706 qpair failed and we were unable to recover it.
00:27:16.706 [2024-07-16 01:32:42.434368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.706 [2024-07-16 01:32:42.434380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.706 qpair failed and we were unable to recover it.
00:27:16.706 [2024-07-16 01:32:42.434512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.706 [2024-07-16 01:32:42.434524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.706 qpair failed and we were unable to recover it.
00:27:16.706 [2024-07-16 01:32:42.434736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.706 [2024-07-16 01:32:42.434767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.706 qpair failed and we were unable to recover it.
00:27:16.706 [2024-07-16 01:32:42.434877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.706 [2024-07-16 01:32:42.434908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.706 qpair failed and we were unable to recover it.
00:27:16.706 [2024-07-16 01:32:42.435024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.706 [2024-07-16 01:32:42.435055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.706 qpair failed and we were unable to recover it.
00:27:16.706 [2024-07-16 01:32:42.435179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.706 [2024-07-16 01:32:42.435218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.706 qpair failed and we were unable to recover it.
00:27:16.706 [2024-07-16 01:32:42.435300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.706 [2024-07-16 01:32:42.435312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.706 qpair failed and we were unable to recover it.
00:27:16.706 [2024-07-16 01:32:42.435499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.706 [2024-07-16 01:32:42.435531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.706 qpair failed and we were unable to recover it.
00:27:16.706 [2024-07-16 01:32:42.435720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.706 [2024-07-16 01:32:42.435750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.706 qpair failed and we were unable to recover it.
00:27:16.706 [2024-07-16 01:32:42.435865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.706 [2024-07-16 01:32:42.435896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.706 qpair failed and we were unable to recover it.
00:27:16.706 [2024-07-16 01:32:42.436145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.706 [2024-07-16 01:32:42.436176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.706 qpair failed and we were unable to recover it.
00:27:16.706 [2024-07-16 01:32:42.436297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.706 [2024-07-16 01:32:42.436327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.706 qpair failed and we were unable to recover it.
00:27:16.706 [2024-07-16 01:32:42.436547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.706 [2024-07-16 01:32:42.436581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.706 qpair failed and we were unable to recover it.
00:27:16.706 [2024-07-16 01:32:42.436759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.706 [2024-07-16 01:32:42.436790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.706 qpair failed and we were unable to recover it.
00:27:16.706 [2024-07-16 01:32:42.436983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.706 [2024-07-16 01:32:42.437014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.706 qpair failed and we were unable to recover it.
00:27:16.706 [2024-07-16 01:32:42.437177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.706 [2024-07-16 01:32:42.437188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.706 qpair failed and we were unable to recover it.
00:27:16.706 [2024-07-16 01:32:42.437271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.706 [2024-07-16 01:32:42.437281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.706 qpair failed and we were unable to recover it.
00:27:16.706 [2024-07-16 01:32:42.437424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.706 [2024-07-16 01:32:42.437436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.706 qpair failed and we were unable to recover it.
00:27:16.706 [2024-07-16 01:32:42.437515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.706 [2024-07-16 01:32:42.437525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.706 qpair failed and we were unable to recover it.
00:27:16.706 [2024-07-16 01:32:42.437606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.706 [2024-07-16 01:32:42.437616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.706 qpair failed and we were unable to recover it.
00:27:16.706 [2024-07-16 01:32:42.437808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.706 [2024-07-16 01:32:42.437820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.706 qpair failed and we were unable to recover it.
00:27:16.706 [2024-07-16 01:32:42.437963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.706 [2024-07-16 01:32:42.437974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.706 qpair failed and we were unable to recover it.
00:27:16.706 [2024-07-16 01:32:42.438182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.706 [2024-07-16 01:32:42.438213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.706 qpair failed and we were unable to recover it.
00:27:16.706 [2024-07-16 01:32:42.438332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.706 [2024-07-16 01:32:42.438374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.706 qpair failed and we were unable to recover it.
00:27:16.706 [2024-07-16 01:32:42.438561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.707 [2024-07-16 01:32:42.438592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.707 qpair failed and we were unable to recover it.
00:27:16.707 [2024-07-16 01:32:42.438778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.707 [2024-07-16 01:32:42.438809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.707 qpair failed and we were unable to recover it.
00:27:16.707 [2024-07-16 01:32:42.438944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.707 [2024-07-16 01:32:42.438975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.707 qpair failed and we were unable to recover it.
00:27:16.707 [2024-07-16 01:32:42.439092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.707 [2024-07-16 01:32:42.439104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.707 qpair failed and we were unable to recover it.
00:27:16.707 [2024-07-16 01:32:42.439234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.707 [2024-07-16 01:32:42.439247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.707 qpair failed and we were unable to recover it.
00:27:16.707 [2024-07-16 01:32:42.439376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.707 [2024-07-16 01:32:42.439388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.707 qpair failed and we were unable to recover it.
00:27:16.707 [2024-07-16 01:32:42.439532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.707 [2024-07-16 01:32:42.439563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.707 qpair failed and we were unable to recover it.
00:27:16.707 [2024-07-16 01:32:42.439752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.707 [2024-07-16 01:32:42.439783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.707 qpair failed and we were unable to recover it.
00:27:16.707 [2024-07-16 01:32:42.439923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.707 [2024-07-16 01:32:42.439953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.707 qpair failed and we were unable to recover it.
00:27:16.707 [2024-07-16 01:32:42.440162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.707 [2024-07-16 01:32:42.440173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.707 qpair failed and we were unable to recover it.
00:27:16.707 [2024-07-16 01:32:42.440323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.707 [2024-07-16 01:32:42.440371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.707 qpair failed and we were unable to recover it.
00:27:16.707 [2024-07-16 01:32:42.440486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.707 [2024-07-16 01:32:42.440517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.707 qpair failed and we were unable to recover it.
00:27:16.707 [2024-07-16 01:32:42.440694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.707 [2024-07-16 01:32:42.440725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.707 qpair failed and we were unable to recover it.
00:27:16.707 [2024-07-16 01:32:42.440841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.707 [2024-07-16 01:32:42.440872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.707 qpair failed and we were unable to recover it.
00:27:16.707 [2024-07-16 01:32:42.441007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.707 [2024-07-16 01:32:42.441038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.707 qpair failed and we were unable to recover it.
00:27:16.707 [2024-07-16 01:32:42.441216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.707 [2024-07-16 01:32:42.441247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.707 qpair failed and we were unable to recover it.
00:27:16.707 [2024-07-16 01:32:42.441369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.707 [2024-07-16 01:32:42.441400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.707 qpair failed and we were unable to recover it.
00:27:16.707 [2024-07-16 01:32:42.441515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.707 [2024-07-16 01:32:42.441552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.707 qpair failed and we were unable to recover it.
00:27:16.707 [2024-07-16 01:32:42.441661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.707 [2024-07-16 01:32:42.441692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.707 qpair failed and we were unable to recover it.
00:27:16.707 [2024-07-16 01:32:42.441872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.707 [2024-07-16 01:32:42.441902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.707 qpair failed and we were unable to recover it.
00:27:16.707 [2024-07-16 01:32:42.442161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.707 [2024-07-16 01:32:42.442172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.707 qpair failed and we were unable to recover it.
00:27:16.707 [2024-07-16 01:32:42.442306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.707 [2024-07-16 01:32:42.442317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.707 qpair failed and we were unable to recover it.
00:27:16.707 [2024-07-16 01:32:42.442468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.707 [2024-07-16 01:32:42.442504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.707 qpair failed and we were unable to recover it.
00:27:16.707 [2024-07-16 01:32:42.442634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.707 [2024-07-16 01:32:42.442664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.707 qpair failed and we were unable to recover it.
00:27:16.707 [2024-07-16 01:32:42.442854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.707 [2024-07-16 01:32:42.442885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.707 qpair failed and we were unable to recover it.
00:27:16.707 [2024-07-16 01:32:42.443029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.707 [2024-07-16 01:32:42.443059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.707 qpair failed and we were unable to recover it.
00:27:16.707 [2024-07-16 01:32:42.443244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.707 [2024-07-16 01:32:42.443275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.707 qpair failed and we were unable to recover it.
00:27:16.707 [2024-07-16 01:32:42.443408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.707 [2024-07-16 01:32:42.443439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.707 qpair failed and we were unable to recover it.
00:27:16.707 [2024-07-16 01:32:42.443615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.707 [2024-07-16 01:32:42.443646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.707 qpair failed and we were unable to recover it.
00:27:16.707 [2024-07-16 01:32:42.443838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.707 [2024-07-16 01:32:42.443869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.707 qpair failed and we were unable to recover it.
00:27:16.707 [2024-07-16 01:32:42.443993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.707 [2024-07-16 01:32:42.444023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.707 qpair failed and we were unable to recover it.
00:27:16.707 [2024-07-16 01:32:42.444142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.707 [2024-07-16 01:32:42.444173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.707 qpair failed and we were unable to recover it.
00:27:16.707 [2024-07-16 01:32:42.444298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.707 [2024-07-16 01:32:42.444328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.707 qpair failed and we were unable to recover it.
00:27:16.707 [2024-07-16 01:32:42.444620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.707 [2024-07-16 01:32:42.444653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.707 qpair failed and we were unable to recover it.
00:27:16.707 [2024-07-16 01:32:42.444859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.707 [2024-07-16 01:32:42.444889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.707 qpair failed and we were unable to recover it.
00:27:16.707 [2024-07-16 01:32:42.445011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.707 [2024-07-16 01:32:42.445039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.707 qpair failed and we were unable to recover it.
00:27:16.707 [2024-07-16 01:32:42.445176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.707 [2024-07-16 01:32:42.445187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.707 qpair failed and we were unable to recover it.
00:27:16.707 [2024-07-16 01:32:42.445256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.707 [2024-07-16 01:32:42.445266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.707 qpair failed and we were unable to recover it.
00:27:16.707 [2024-07-16 01:32:42.445363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.707 [2024-07-16 01:32:42.445374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.707 qpair failed and we were unable to recover it.
00:27:16.707 [2024-07-16 01:32:42.445456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.707 [2024-07-16 01:32:42.445466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.707 qpair failed and we were unable to recover it.
00:27:16.707 [2024-07-16 01:32:42.445558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.707 [2024-07-16 01:32:42.445568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.707 qpair failed and we were unable to recover it.
00:27:16.707 [2024-07-16 01:32:42.445766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.708 [2024-07-16 01:32:42.445777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.708 qpair failed and we were unable to recover it.
00:27:16.708 [2024-07-16 01:32:42.445850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.708 [2024-07-16 01:32:42.445860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.708 qpair failed and we were unable to recover it.
00:27:16.708 [2024-07-16 01:32:42.445999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.708 [2024-07-16 01:32:42.446010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.708 qpair failed and we were unable to recover it.
00:27:16.708 [2024-07-16 01:32:42.446096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.708 [2024-07-16 01:32:42.446107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.708 qpair failed and we were unable to recover it.
00:27:16.708 [2024-07-16 01:32:42.446303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.708 [2024-07-16 01:32:42.446314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.708 qpair failed and we were unable to recover it.
00:27:16.708 [2024-07-16 01:32:42.446405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.708 [2024-07-16 01:32:42.446416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.708 qpair failed and we were unable to recover it.
00:27:16.708 [2024-07-16 01:32:42.446496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.708 [2024-07-16 01:32:42.446506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.708 qpair failed and we were unable to recover it.
00:27:16.708 [2024-07-16 01:32:42.446650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.708 [2024-07-16 01:32:42.446662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.708 qpair failed and we were unable to recover it.
00:27:16.708 [2024-07-16 01:32:42.446752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.708 [2024-07-16 01:32:42.446763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.708 qpair failed and we were unable to recover it.
00:27:16.708 [2024-07-16 01:32:42.446844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.708 [2024-07-16 01:32:42.446854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.708 qpair failed and we were unable to recover it.
00:27:16.708 [2024-07-16 01:32:42.447044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.708 [2024-07-16 01:32:42.447074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.708 qpair failed and we were unable to recover it.
00:27:16.708 [2024-07-16 01:32:42.447197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.708 [2024-07-16 01:32:42.447226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.708 qpair failed and we were unable to recover it.
00:27:16.708 [2024-07-16 01:32:42.447405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.708 [2024-07-16 01:32:42.447437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.708 qpair failed and we were unable to recover it.
00:27:16.708 [2024-07-16 01:32:42.447573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.708 [2024-07-16 01:32:42.447604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.708 qpair failed and we were unable to recover it.
00:27:16.708 [2024-07-16 01:32:42.447782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.708 [2024-07-16 01:32:42.447812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.708 qpair failed and we were unable to recover it.
00:27:16.708 [2024-07-16 01:32:42.448022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.708 [2024-07-16 01:32:42.448052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.708 qpair failed and we were unable to recover it.
00:27:16.708 [2024-07-16 01:32:42.448155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.708 [2024-07-16 01:32:42.448166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.708 qpair failed and we were unable to recover it.
00:27:16.708 [2024-07-16 01:32:42.448299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.708 [2024-07-16 01:32:42.448310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.708 qpair failed and we were unable to recover it.
00:27:16.708 [2024-07-16 01:32:42.448372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.708 [2024-07-16 01:32:42.448385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.708 qpair failed and we were unable to recover it.
00:27:16.708 [2024-07-16 01:32:42.448455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.708 [2024-07-16 01:32:42.448466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.708 qpair failed and we were unable to recover it.
00:27:16.708 [2024-07-16 01:32:42.448575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.708 [2024-07-16 01:32:42.448585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.708 qpair failed and we were unable to recover it.
00:27:16.708 [2024-07-16 01:32:42.448654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.708 [2024-07-16 01:32:42.448666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.708 qpair failed and we were unable to recover it.
00:27:16.708 [2024-07-16 01:32:42.448831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.708 [2024-07-16 01:32:42.448842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.708 qpair failed and we were unable to recover it.
00:27:16.708 [2024-07-16 01:32:42.448918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.708 [2024-07-16 01:32:42.448928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.708 qpair failed and we were unable to recover it.
00:27:16.708 [2024-07-16 01:32:42.449003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.708 [2024-07-16 01:32:42.449013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.708 qpair failed and we were unable to recover it.
00:27:16.708 [2024-07-16 01:32:42.449210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.708 [2024-07-16 01:32:42.449241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.708 qpair failed and we were unable to recover it.
00:27:16.708 [2024-07-16 01:32:42.449429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.708 [2024-07-16 01:32:42.449460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.708 qpair failed and we were unable to recover it.
00:27:16.708 [2024-07-16 01:32:42.449588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.708 [2024-07-16 01:32:42.449618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.708 qpair failed and we were unable to recover it.
00:27:16.708 [2024-07-16 01:32:42.449726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.708 [2024-07-16 01:32:42.449757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.708 qpair failed and we were unable to recover it.
00:27:16.708 [2024-07-16 01:32:42.449883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.708 [2024-07-16 01:32:42.449913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.708 qpair failed and we were unable to recover it.
00:27:16.708 [2024-07-16 01:32:42.450028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.708 [2024-07-16 01:32:42.450058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.708 qpair failed and we were unable to recover it.
00:27:16.708 [2024-07-16 01:32:42.450240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.708 [2024-07-16 01:32:42.450251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.708 qpair failed and we were unable to recover it.
00:27:16.708 [2024-07-16 01:32:42.450396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.708 [2024-07-16 01:32:42.450427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.708 qpair failed and we were unable to recover it.
00:27:16.708 [2024-07-16 01:32:42.450663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.708 [2024-07-16 01:32:42.450694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.708 qpair failed and we were unable to recover it.
00:27:16.708 [2024-07-16 01:32:42.450807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.708 [2024-07-16 01:32:42.450836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.708 qpair failed and we were unable to recover it.
00:27:16.708 [2024-07-16 01:32:42.450947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.708 [2024-07-16 01:32:42.450959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.708 qpair failed and we were unable to recover it.
00:27:16.708 [2024-07-16 01:32:42.451120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.708 [2024-07-16 01:32:42.451131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.708 qpair failed and we were unable to recover it.
00:27:16.708 [2024-07-16 01:32:42.451262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.708 [2024-07-16 01:32:42.451292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.708 qpair failed and we were unable to recover it.
00:27:16.708 [2024-07-16 01:32:42.451430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.708 [2024-07-16 01:32:42.451461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.708 qpair failed and we were unable to recover it.
00:27:16.708 [2024-07-16 01:32:42.451588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.708 [2024-07-16 01:32:42.451618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.708 qpair failed and we were unable to recover it.
00:27:16.708 [2024-07-16 01:32:42.451734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.709 [2024-07-16 01:32:42.451765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.709 qpair failed and we were unable to recover it.
00:27:16.709 [2024-07-16 01:32:42.451944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.709 [2024-07-16 01:32:42.451973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.709 qpair failed and we were unable to recover it.
00:27:16.709 [2024-07-16 01:32:42.452072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.709 [2024-07-16 01:32:42.452083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.709 qpair failed and we were unable to recover it.
00:27:16.709 [2024-07-16 01:32:42.452162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.709 [2024-07-16 01:32:42.452172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.709 qpair failed and we were unable to recover it.
00:27:16.709 [2024-07-16 01:32:42.452245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.709 [2024-07-16 01:32:42.452255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.709 qpair failed and we were unable to recover it.
00:27:16.709 [2024-07-16 01:32:42.452348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.709 [2024-07-16 01:32:42.452361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.709 qpair failed and we were unable to recover it.
00:27:16.709 [2024-07-16 01:32:42.452443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.709 [2024-07-16 01:32:42.452453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.709 qpair failed and we were unable to recover it.
00:27:16.709 [2024-07-16 01:32:42.452539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.709 [2024-07-16 01:32:42.452550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.709 qpair failed and we were unable to recover it.
00:27:16.709 [2024-07-16 01:32:42.452690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.709 [2024-07-16 01:32:42.452701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.709 qpair failed and we were unable to recover it.
00:27:16.709 [2024-07-16 01:32:42.452778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.709 [2024-07-16 01:32:42.452789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.709 qpair failed and we were unable to recover it.
00:27:16.709 [2024-07-16 01:32:42.452851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.709 [2024-07-16 01:32:42.452861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.709 qpair failed and we were unable to recover it.
00:27:16.709 [2024-07-16 01:32:42.453012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.709 [2024-07-16 01:32:42.453043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.709 qpair failed and we were unable to recover it.
00:27:16.709 [2024-07-16 01:32:42.453162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.709 [2024-07-16 01:32:42.453192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.709 qpair failed and we were unable to recover it.
00:27:16.709 [2024-07-16 01:32:42.453383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.709 [2024-07-16 01:32:42.453416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.709 qpair failed and we were unable to recover it.
00:27:16.709 [2024-07-16 01:32:42.453609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.709 [2024-07-16 01:32:42.453641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.709 qpair failed and we were unable to recover it.
00:27:16.709 [2024-07-16 01:32:42.453823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.709 [2024-07-16 01:32:42.453855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.709 qpair failed and we were unable to recover it.
00:27:16.709 [2024-07-16 01:32:42.453961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.709 [2024-07-16 01:32:42.453992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.709 qpair failed and we were unable to recover it.
00:27:16.709 [2024-07-16 01:32:42.454106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.709 [2024-07-16 01:32:42.454136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.709 qpair failed and we were unable to recover it.
00:27:16.709 [2024-07-16 01:32:42.454294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.709 [2024-07-16 01:32:42.454305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.709 qpair failed and we were unable to recover it.
00:27:16.709 [2024-07-16 01:32:42.454514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.709 [2024-07-16 01:32:42.454525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.709 qpair failed and we were unable to recover it.
00:27:16.709 [2024-07-16 01:32:42.454664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.709 [2024-07-16 01:32:42.454676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.709 qpair failed and we were unable to recover it.
00:27:16.709 [2024-07-16 01:32:42.454865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.709 [2024-07-16 01:32:42.454901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.709 qpair failed and we were unable to recover it.
00:27:16.709 [2024-07-16 01:32:42.455088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.709 [2024-07-16 01:32:42.455119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.709 qpair failed and we were unable to recover it.
00:27:16.709 [2024-07-16 01:32:42.455292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.709 [2024-07-16 01:32:42.455323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.709 qpair failed and we were unable to recover it.
00:27:16.709 [2024-07-16 01:32:42.455513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.709 [2024-07-16 01:32:42.455543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.709 qpair failed and we were unable to recover it.
00:27:16.709 [2024-07-16 01:32:42.455677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.709 [2024-07-16 01:32:42.455707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.709 qpair failed and we were unable to recover it.
00:27:16.709 [2024-07-16 01:32:42.455904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.709 [2024-07-16 01:32:42.455915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.709 qpair failed and we were unable to recover it.
00:27:16.709 [2024-07-16 01:32:42.456002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.709 [2024-07-16 01:32:42.456012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.709 qpair failed and we were unable to recover it.
00:27:16.709 [2024-07-16 01:32:42.456154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.709 [2024-07-16 01:32:42.456185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.709 qpair failed and we were unable to recover it.
00:27:16.709 [2024-07-16 01:32:42.456305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.709 [2024-07-16 01:32:42.456335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.709 qpair failed and we were unable to recover it.
00:27:16.709 [2024-07-16 01:32:42.456470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.709 [2024-07-16 01:32:42.456502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.709 qpair failed and we were unable to recover it.
00:27:16.709 [2024-07-16 01:32:42.456638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.709 [2024-07-16 01:32:42.456670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.709 qpair failed and we were unable to recover it.
00:27:16.709 [2024-07-16 01:32:42.456855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.709 [2024-07-16 01:32:42.456887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.709 qpair failed and we were unable to recover it.
00:27:16.709 [2024-07-16 01:32:42.457075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.709 [2024-07-16 01:32:42.457105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.709 qpair failed and we were unable to recover it.
00:27:16.709 [2024-07-16 01:32:42.457286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.709 [2024-07-16 01:32:42.457317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.709 qpair failed and we were unable to recover it.
00:27:16.709 [2024-07-16 01:32:42.457514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.709 [2024-07-16 01:32:42.457545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.709 qpair failed and we were unable to recover it.
00:27:16.709 [2024-07-16 01:32:42.457794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.710 [2024-07-16 01:32:42.457825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.710 qpair failed and we were unable to recover it.
00:27:16.710 [2024-07-16 01:32:42.458014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.710 [2024-07-16 01:32:42.458046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.710 qpair failed and we were unable to recover it.
00:27:16.710 [2024-07-16 01:32:42.458168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.710 [2024-07-16 01:32:42.458179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.710 qpair failed and we were unable to recover it.
00:27:16.710 [2024-07-16 01:32:42.458310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.710 [2024-07-16 01:32:42.458321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.710 qpair failed and we were unable to recover it.
00:27:16.710 [2024-07-16 01:32:42.458395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-07-16 01:32:42.458405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-07-16 01:32:42.458545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-07-16 01:32:42.458556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-07-16 01:32:42.458700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-07-16 01:32:42.458712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-07-16 01:32:42.458854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-07-16 01:32:42.458865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-07-16 01:32:42.459005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-07-16 01:32:42.459035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-07-16 01:32:42.459209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-07-16 01:32:42.459240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-07-16 01:32:42.459424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-07-16 01:32:42.459457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-07-16 01:32:42.459751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-07-16 01:32:42.459782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-07-16 01:32:42.459976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-07-16 01:32:42.460007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-07-16 01:32:42.460205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-07-16 01:32:42.460248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 
00:27:16.710 [2024-07-16 01:32:42.460405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-07-16 01:32:42.460417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-07-16 01:32:42.460556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-07-16 01:32:42.460567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-07-16 01:32:42.460661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-07-16 01:32:42.460672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-07-16 01:32:42.460755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-07-16 01:32:42.460765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-07-16 01:32:42.460834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-07-16 01:32:42.460844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-07-16 01:32:42.460990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-07-16 01:32:42.461023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-07-16 01:32:42.461147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-07-16 01:32:42.461177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-07-16 01:32:42.461305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-07-16 01:32:42.461364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-07-16 01:32:42.461486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-07-16 01:32:42.461518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-07-16 01:32:42.461727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-07-16 01:32:42.461757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 
00:27:16.710 [2024-07-16 01:32:42.462009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-07-16 01:32:42.462020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-07-16 01:32:42.462084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-07-16 01:32:42.462095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-07-16 01:32:42.462232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-07-16 01:32:42.462243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-07-16 01:32:42.462411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-07-16 01:32:42.462423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-07-16 01:32:42.462561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-07-16 01:32:42.462592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-07-16 01:32:42.462721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-07-16 01:32:42.462751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-07-16 01:32:42.462937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-07-16 01:32:42.462968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-07-16 01:32:42.463092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-07-16 01:32:42.463104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-07-16 01:32:42.463270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-07-16 01:32:42.463280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-07-16 01:32:42.463365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-07-16 01:32:42.463376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 
00:27:16.710 [2024-07-16 01:32:42.463587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-07-16 01:32:42.463619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-07-16 01:32:42.463737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-07-16 01:32:42.463767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-07-16 01:32:42.463886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-07-16 01:32:42.463917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-07-16 01:32:42.464039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-07-16 01:32:42.464075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-07-16 01:32:42.464135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-07-16 01:32:42.464145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-07-16 01:32:42.464285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-07-16 01:32:42.464296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-07-16 01:32:42.464378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-07-16 01:32:42.464390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-07-16 01:32:42.464554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-07-16 01:32:42.464566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.711 [2024-07-16 01:32:42.464715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-07-16 01:32:42.464748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.711 qpair failed and we were unable to recover it. 00:27:16.711 [2024-07-16 01:32:42.464936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-07-16 01:32:42.464967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.711 qpair failed and we were unable to recover it. 
00:27:16.711 [2024-07-16 01:32:42.465180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-07-16 01:32:42.465211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.711 qpair failed and we were unable to recover it. 00:27:16.711 [2024-07-16 01:32:42.465365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-07-16 01:32:42.465377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.711 qpair failed and we were unable to recover it. 00:27:16.711 [2024-07-16 01:32:42.465445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-07-16 01:32:42.465455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.711 qpair failed and we were unable to recover it. 00:27:16.711 [2024-07-16 01:32:42.465607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-07-16 01:32:42.465618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.711 qpair failed and we were unable to recover it. 00:27:16.711 [2024-07-16 01:32:42.465766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-07-16 01:32:42.465797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.711 qpair failed and we were unable to recover it. 00:27:16.711 [2024-07-16 01:32:42.465915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-07-16 01:32:42.465959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.711 qpair failed and we were unable to recover it. 00:27:16.711 [2024-07-16 01:32:42.466136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-07-16 01:32:42.466147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.711 qpair failed and we were unable to recover it. 00:27:16.711 [2024-07-16 01:32:42.466296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-07-16 01:32:42.466326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.711 qpair failed and we were unable to recover it. 00:27:16.711 [2024-07-16 01:32:42.466479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-07-16 01:32:42.466524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.711 qpair failed and we were unable to recover it. 00:27:16.711 [2024-07-16 01:32:42.466638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-07-16 01:32:42.466669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.711 qpair failed and we were unable to recover it. 
00:27:16.711 [2024-07-16 01:32:42.466800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-07-16 01:32:42.466831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.711 qpair failed and we were unable to recover it. 00:27:16.711 [2024-07-16 01:32:42.467089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-07-16 01:32:42.467105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.711 qpair failed and we were unable to recover it. 00:27:16.711 [2024-07-16 01:32:42.467242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-07-16 01:32:42.467259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.711 qpair failed and we were unable to recover it. 00:27:16.711 [2024-07-16 01:32:42.467438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-07-16 01:32:42.467454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.711 qpair failed and we were unable to recover it. 00:27:16.711 [2024-07-16 01:32:42.467603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-07-16 01:32:42.467620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.711 qpair failed and we were unable to recover it. 00:27:16.711 [2024-07-16 01:32:42.467771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-07-16 01:32:42.467787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.711 qpair failed and we were unable to recover it. 00:27:16.711 [2024-07-16 01:32:42.467930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-07-16 01:32:42.467946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.711 qpair failed and we were unable to recover it. 00:27:16.711 [2024-07-16 01:32:42.468089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-07-16 01:32:42.468105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.711 qpair failed and we were unable to recover it. 00:27:16.711 [2024-07-16 01:32:42.468207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-07-16 01:32:42.468238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.711 qpair failed and we were unable to recover it. 00:27:16.711 [2024-07-16 01:32:42.468363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-07-16 01:32:42.468395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.711 qpair failed and we were unable to recover it. 
00:27:16.711 [2024-07-16 01:32:42.468586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-07-16 01:32:42.468617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.711 qpair failed and we were unable to recover it. 00:27:16.711 [2024-07-16 01:32:42.468727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-07-16 01:32:42.468764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.711 qpair failed and we were unable to recover it. 00:27:16.711 [2024-07-16 01:32:42.468962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-07-16 01:32:42.468993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.711 qpair failed and we were unable to recover it. 00:27:16.711 [2024-07-16 01:32:42.469180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-07-16 01:32:42.469196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.711 qpair failed and we were unable to recover it. 00:27:16.711 [2024-07-16 01:32:42.469392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-07-16 01:32:42.469423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.711 qpair failed and we were unable to recover it. 00:27:16.711 [2024-07-16 01:32:42.469642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-07-16 01:32:42.469674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.711 qpair failed and we were unable to recover it. 00:27:16.711 [2024-07-16 01:32:42.469866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-07-16 01:32:42.469906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.711 qpair failed and we were unable to recover it. 00:27:16.711 [2024-07-16 01:32:42.469983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-07-16 01:32:42.469997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.711 qpair failed and we were unable to recover it. 00:27:16.711 [2024-07-16 01:32:42.470200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-07-16 01:32:42.470230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.711 qpair failed and we were unable to recover it. 00:27:16.711 [2024-07-16 01:32:42.470429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-07-16 01:32:42.470461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.711 qpair failed and we were unable to recover it. 
00:27:16.711 [2024-07-16 01:32:42.470587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-07-16 01:32:42.470617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.711 qpair failed and we were unable to recover it. 00:27:16.711 [2024-07-16 01:32:42.470860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-07-16 01:32:42.470891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.711 qpair failed and we were unable to recover it. 00:27:16.711 [2024-07-16 01:32:42.471095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-07-16 01:32:42.471111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.711 qpair failed and we were unable to recover it. 00:27:16.711 [2024-07-16 01:32:42.471285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-07-16 01:32:42.471317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.711 qpair failed and we were unable to recover it. 00:27:16.711 [2024-07-16 01:32:42.471461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-07-16 01:32:42.471493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.711 qpair failed and we were unable to recover it. 00:27:16.711 [2024-07-16 01:32:42.471601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-07-16 01:32:42.471632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.711 qpair failed and we were unable to recover it. 00:27:16.711 [2024-07-16 01:32:42.471751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-07-16 01:32:42.471781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.711 qpair failed and we were unable to recover it. 00:27:16.711 [2024-07-16 01:32:42.471890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-07-16 01:32:42.471906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.711 qpair failed and we were unable to recover it. 00:27:16.711 [2024-07-16 01:32:42.471995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-07-16 01:32:42.472011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.712 qpair failed and we were unable to recover it. 00:27:16.712 [2024-07-16 01:32:42.472180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.712 [2024-07-16 01:32:42.472195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.712 qpair failed and we were unable to recover it. 
00:27:16.712 [2024-07-16 01:32:42.472269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.712 [2024-07-16 01:32:42.472283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.712 qpair failed and we were unable to recover it. 00:27:16.712 [2024-07-16 01:32:42.472442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.712 [2024-07-16 01:32:42.472459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.712 qpair failed and we were unable to recover it. 00:27:16.712 [2024-07-16 01:32:42.472533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.712 [2024-07-16 01:32:42.472547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.712 qpair failed and we were unable to recover it. 00:27:16.712 [2024-07-16 01:32:42.472702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.712 [2024-07-16 01:32:42.472718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.712 qpair failed and we were unable to recover it. 00:27:16.712 [2024-07-16 01:32:42.472891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.712 [2024-07-16 01:32:42.472908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.712 qpair failed and we were unable to recover it. 00:27:16.712 [2024-07-16 01:32:42.473014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.712 [2024-07-16 01:32:42.473045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.712 qpair failed and we were unable to recover it. 00:27:16.712 [2024-07-16 01:32:42.473170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.712 [2024-07-16 01:32:42.473201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.712 qpair failed and we were unable to recover it. 00:27:16.712 [2024-07-16 01:32:42.473312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.712 [2024-07-16 01:32:42.473351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.712 qpair failed and we were unable to recover it. 00:27:16.712 [2024-07-16 01:32:42.473572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.712 [2024-07-16 01:32:42.473641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.712 qpair failed and we were unable to recover it. 00:27:16.712 [2024-07-16 01:32:42.473840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.712 [2024-07-16 01:32:42.473874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.712 qpair failed and we were unable to recover it. 
00:27:16.712 [2024-07-16 01:32:42.474119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.712 [2024-07-16 01:32:42.474149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.712 qpair failed and we were unable to recover it. 00:27:16.712 [2024-07-16 01:32:42.474333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.712 [2024-07-16 01:32:42.474361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.712 qpair failed and we were unable to recover it. 00:27:16.712 [2024-07-16 01:32:42.474449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.712 [2024-07-16 01:32:42.474494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.712 qpair failed and we were unable to recover it. 00:27:16.712 [2024-07-16 01:32:42.474611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.712 [2024-07-16 01:32:42.474641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.712 qpair failed and we were unable to recover it. 00:27:16.712 [2024-07-16 01:32:42.474837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.712 [2024-07-16 01:32:42.474868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.712 qpair failed and we were unable to recover it. 00:27:16.712 [2024-07-16 01:32:42.475113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.712 [2024-07-16 01:32:42.475144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.712 qpair failed and we were unable to recover it. 00:27:16.712 [2024-07-16 01:32:42.475325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.712 [2024-07-16 01:32:42.475366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.712 qpair failed and we were unable to recover it. 00:27:16.712 [2024-07-16 01:32:42.475498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.712 [2024-07-16 01:32:42.475529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.712 qpair failed and we were unable to recover it. 00:27:16.712 [2024-07-16 01:32:42.475770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.712 [2024-07-16 01:32:42.475800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.712 qpair failed and we were unable to recover it. 00:27:16.712 [2024-07-16 01:32:42.475978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.712 [2024-07-16 01:32:42.476009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.712 qpair failed and we were unable to recover it. 
00:27:16.712 [2024-07-16 01:32:42.476196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.712 [2024-07-16 01:32:42.476225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.712 qpair failed and we were unable to recover it. 00:27:16.712 [2024-07-16 01:32:42.476412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.712 [2024-07-16 01:32:42.476452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.712 qpair failed and we were unable to recover it. 00:27:16.712 [2024-07-16 01:32:42.476638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.712 [2024-07-16 01:32:42.476669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.712 qpair failed and we were unable to recover it. 00:27:16.712 [2024-07-16 01:32:42.476854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.712 [2024-07-16 01:32:42.476891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.712 qpair failed and we were unable to recover it. 00:27:16.712 [2024-07-16 01:32:42.477064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.712 [2024-07-16 01:32:42.477095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.712 qpair failed and we were unable to recover it. 00:27:16.712 [2024-07-16 01:32:42.477228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.712 [2024-07-16 01:32:42.477258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.712 qpair failed and we were unable to recover it. 00:27:16.712 [2024-07-16 01:32:42.477443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.712 [2024-07-16 01:32:42.477475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.712 qpair failed and we were unable to recover it. 00:27:16.712 [2024-07-16 01:32:42.477647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.712 [2024-07-16 01:32:42.477682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.712 qpair failed and we were unable to recover it. 00:27:16.712 [2024-07-16 01:32:42.477817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.712 [2024-07-16 01:32:42.477847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.712 qpair failed and we were unable to recover it. 00:27:16.712 [2024-07-16 01:32:42.478036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.712 [2024-07-16 01:32:42.478067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.712 qpair failed and we were unable to recover it. 
00:27:16.712 [2024-07-16 01:32:42.478232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.712 [2024-07-16 01:32:42.478247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.712 qpair failed and we were unable to recover it. 00:27:16.712 [2024-07-16 01:32:42.478391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.712 [2024-07-16 01:32:42.478405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.712 qpair failed and we were unable to recover it. 00:27:16.712 [2024-07-16 01:32:42.478542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.712 [2024-07-16 01:32:42.478553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.712 qpair failed and we were unable to recover it. 00:27:16.712 [2024-07-16 01:32:42.478700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.712 [2024-07-16 01:32:42.478730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.712 qpair failed and we were unable to recover it. 00:27:16.712 [2024-07-16 01:32:42.478918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.712 [2024-07-16 01:32:42.478950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.712 qpair failed and we were unable to recover it. 00:27:16.712 [2024-07-16 01:32:42.479076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.712 [2024-07-16 01:32:42.479107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.712 qpair failed and we were unable to recover it. 00:27:16.712 [2024-07-16 01:32:42.479399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.712 [2024-07-16 01:32:42.479410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.712 qpair failed and we were unable to recover it. 00:27:16.712 [2024-07-16 01:32:42.479551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.712 [2024-07-16 01:32:42.479562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.712 qpair failed and we were unable to recover it. 00:27:16.712 [2024-07-16 01:32:42.479711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.712 [2024-07-16 01:32:42.479742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.712 qpair failed and we were unable to recover it. 00:27:16.712 [2024-07-16 01:32:42.479874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.712 [2024-07-16 01:32:42.479904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.712 qpair failed and we were unable to recover it. 
00:27:16.713 [2024-07-16 01:32:42.480102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.713 [2024-07-16 01:32:42.480134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.713 qpair failed and we were unable to recover it. 00:27:16.713 [2024-07-16 01:32:42.480263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.713 [2024-07-16 01:32:42.480293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.713 qpair failed and we were unable to recover it. 00:27:16.713 [2024-07-16 01:32:42.480432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.713 [2024-07-16 01:32:42.480465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.713 qpair failed and we were unable to recover it. 00:27:16.713 [2024-07-16 01:32:42.480649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.713 [2024-07-16 01:32:42.480681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.713 qpair failed and we were unable to recover it. 00:27:16.713 [2024-07-16 01:32:42.480857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.713 [2024-07-16 01:32:42.480888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.713 qpair failed and we were unable to recover it. 00:27:16.713 [2024-07-16 01:32:42.480996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.713 [2024-07-16 01:32:42.481025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.713 qpair failed and we were unable to recover it. 00:27:16.713 [2024-07-16 01:32:42.481207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.713 [2024-07-16 01:32:42.481237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.713 qpair failed and we were unable to recover it. 00:27:16.713 [2024-07-16 01:32:42.481424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.713 [2024-07-16 01:32:42.481456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.713 qpair failed and we were unable to recover it. 00:27:16.713 [2024-07-16 01:32:42.481719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.713 [2024-07-16 01:32:42.481790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.713 qpair failed and we were unable to recover it. 00:27:16.713 [2024-07-16 01:32:42.482023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.713 [2024-07-16 01:32:42.482058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.713 qpair failed and we were unable to recover it. 
00:27:16.713 [2024-07-16 01:32:42.482236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.713 [2024-07-16 01:32:42.482268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.713 qpair failed and we were unable to recover it. 00:27:16.713 [2024-07-16 01:32:42.482476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.713 [2024-07-16 01:32:42.482495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.713 qpair failed and we were unable to recover it. 00:27:16.713 [2024-07-16 01:32:42.482604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.713 [2024-07-16 01:32:42.482621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.713 qpair failed and we were unable to recover it. 00:27:16.713 [2024-07-16 01:32:42.482702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.713 [2024-07-16 01:32:42.482716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.713 qpair failed and we were unable to recover it. 00:27:16.713 [2024-07-16 01:32:42.482982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.713 [2024-07-16 01:32:42.483016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.713 qpair failed and we were unable to recover it. 00:27:16.713 [2024-07-16 01:32:42.483196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.713 [2024-07-16 01:32:42.483228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.713 qpair failed and we were unable to recover it. 00:27:16.713 [2024-07-16 01:32:42.483357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.713 [2024-07-16 01:32:42.483394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.713 qpair failed and we were unable to recover it. 00:27:16.713 [2024-07-16 01:32:42.483639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.713 [2024-07-16 01:32:42.483669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.713 qpair failed and we were unable to recover it. 00:27:16.713 [2024-07-16 01:32:42.483984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.713 [2024-07-16 01:32:42.484015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.713 qpair failed and we were unable to recover it. 00:27:16.713 [2024-07-16 01:32:42.484245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.713 [2024-07-16 01:32:42.484256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.713 qpair failed and we were unable to recover it. 
00:27:16.713 [2024-07-16 01:32:42.484413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.713 [2024-07-16 01:32:42.484449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.713 qpair failed and we were unable to recover it. 00:27:16.713 [2024-07-16 01:32:42.484564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.713 [2024-07-16 01:32:42.484599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.713 qpair failed and we were unable to recover it. 00:27:16.713 [2024-07-16 01:32:42.484732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.713 [2024-07-16 01:32:42.484761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.713 qpair failed and we were unable to recover it. 00:27:16.713 [2024-07-16 01:32:42.484939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.713 [2024-07-16 01:32:42.484970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.713 qpair failed and we were unable to recover it. 00:27:16.713 [2024-07-16 01:32:42.485156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.713 [2024-07-16 01:32:42.485187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.713 qpair failed and we were unable to recover it. 00:27:16.713 [2024-07-16 01:32:42.485377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.713 [2024-07-16 01:32:42.485408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.713 qpair failed and we were unable to recover it. 00:27:16.713 [2024-07-16 01:32:42.485542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.713 [2024-07-16 01:32:42.485573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.713 qpair failed and we were unable to recover it. 00:27:16.713 [2024-07-16 01:32:42.485701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.713 [2024-07-16 01:32:42.485733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.713 qpair failed and we were unable to recover it. 00:27:16.713 [2024-07-16 01:32:42.485862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.713 [2024-07-16 01:32:42.485892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.713 qpair failed and we were unable to recover it. 00:27:16.713 [2024-07-16 01:32:42.486161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.713 [2024-07-16 01:32:42.486192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.713 qpair failed and we were unable to recover it. 
00:27:16.713 [2024-07-16 01:32:42.486385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.713 [2024-07-16 01:32:42.486397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.713 qpair failed and we were unable to recover it. 00:27:16.713 [2024-07-16 01:32:42.486484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.713 [2024-07-16 01:32:42.486521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.713 qpair failed and we were unable to recover it. 00:27:16.713 [2024-07-16 01:32:42.486651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.713 [2024-07-16 01:32:42.486681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.714 qpair failed and we were unable to recover it. 00:27:16.714 [2024-07-16 01:32:42.486799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.714 [2024-07-16 01:32:42.486829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.714 qpair failed and we were unable to recover it. 00:27:16.714 [2024-07-16 01:32:42.487028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.714 [2024-07-16 01:32:42.487058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.714 qpair failed and we were unable to recover it. 00:27:16.714 [2024-07-16 01:32:42.487141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.714 [2024-07-16 01:32:42.487151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.714 qpair failed and we were unable to recover it. 00:27:16.714 [2024-07-16 01:32:42.487299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.714 [2024-07-16 01:32:42.487330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.714 qpair failed and we were unable to recover it. 00:27:16.714 [2024-07-16 01:32:42.487461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.714 [2024-07-16 01:32:42.487493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.714 qpair failed and we were unable to recover it. 00:27:16.714 [2024-07-16 01:32:42.487673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.714 [2024-07-16 01:32:42.487703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.714 qpair failed and we were unable to recover it. 00:27:16.714 [2024-07-16 01:32:42.487881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.714 [2024-07-16 01:32:42.487912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.714 qpair failed and we were unable to recover it. 
00:27:16.714 [2024-07-16 01:32:42.488022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.714 [2024-07-16 01:32:42.488053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.714 qpair failed and we were unable to recover it. 00:27:16.714 [2024-07-16 01:32:42.488285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.714 [2024-07-16 01:32:42.488316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.714 qpair failed and we were unable to recover it. 00:27:16.714 [2024-07-16 01:32:42.488535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.714 [2024-07-16 01:32:42.488575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.714 qpair failed and we were unable to recover it. 00:27:16.714 [2024-07-16 01:32:42.488818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.714 [2024-07-16 01:32:42.488856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.714 qpair failed and we were unable to recover it. 00:27:16.714 [2024-07-16 01:32:42.488978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.714 [2024-07-16 01:32:42.489009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.714 qpair failed and we were unable to recover it. 00:27:16.714 [2024-07-16 01:32:42.489273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.714 [2024-07-16 01:32:42.489305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.714 qpair failed and we were unable to recover it. 00:27:16.714 [2024-07-16 01:32:42.489570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.714 [2024-07-16 01:32:42.489605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.714 qpair failed and we were unable to recover it. 00:27:16.714 [2024-07-16 01:32:42.489777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.714 [2024-07-16 01:32:42.489808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.714 qpair failed and we were unable to recover it. 00:27:16.714 [2024-07-16 01:32:42.490017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.714 [2024-07-16 01:32:42.490053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.714 qpair failed and we were unable to recover it. 00:27:16.714 [2024-07-16 01:32:42.490197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.714 [2024-07-16 01:32:42.490229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.714 qpair failed and we were unable to recover it. 
00:27:16.714 [2024-07-16 01:32:42.490404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.714 [2024-07-16 01:32:42.490416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.714 qpair failed and we were unable to recover it. 00:27:16.714 [2024-07-16 01:32:42.490481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.714 [2024-07-16 01:32:42.490491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.714 qpair failed and we were unable to recover it. 00:27:16.714 [2024-07-16 01:32:42.490628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.714 [2024-07-16 01:32:42.490639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.714 qpair failed and we were unable to recover it. 00:27:16.714 [2024-07-16 01:32:42.490716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.714 [2024-07-16 01:32:42.490726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.714 qpair failed and we were unable to recover it. 00:27:16.714 [2024-07-16 01:32:42.490886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.714 [2024-07-16 01:32:42.490916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.714 qpair failed and we were unable to recover it. 00:27:16.714 [2024-07-16 01:32:42.491163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.714 [2024-07-16 01:32:42.491194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.714 qpair failed and we were unable to recover it. 00:27:16.714 [2024-07-16 01:32:42.491374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.714 [2024-07-16 01:32:42.491405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.714 qpair failed and we were unable to recover it. 00:27:16.714 [2024-07-16 01:32:42.491530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.714 [2024-07-16 01:32:42.491561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.714 qpair failed and we were unable to recover it. 00:27:16.715 [2024-07-16 01:32:42.491842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-07-16 01:32:42.491873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-07-16 01:32:42.492064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-07-16 01:32:42.492095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 
00:27:16.715 [2024-07-16 01:32:42.492208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-07-16 01:32:42.492238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-07-16 01:32:42.492347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-07-16 01:32:42.492363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-07-16 01:32:42.492522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-07-16 01:32:42.492554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-07-16 01:32:42.492812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-07-16 01:32:42.492843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-07-16 01:32:42.493012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-07-16 01:32:42.493023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-07-16 01:32:42.493108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-07-16 01:32:42.493118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-07-16 01:32:42.493310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-07-16 01:32:42.493353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-07-16 01:32:42.493551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-07-16 01:32:42.493582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-07-16 01:32:42.493704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-07-16 01:32:42.493735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-07-16 01:32:42.493922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-07-16 01:32:42.493953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 
00:27:16.715 [2024-07-16 01:32:42.494144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-07-16 01:32:42.494175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-07-16 01:32:42.494389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-07-16 01:32:42.494422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-07-16 01:32:42.494602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-07-16 01:32:42.494633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-07-16 01:32:42.494874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-07-16 01:32:42.494906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-07-16 01:32:42.495028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-07-16 01:32:42.495040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-07-16 01:32:42.495118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-07-16 01:32:42.495128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-07-16 01:32:42.495216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-07-16 01:32:42.495226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-07-16 01:32:42.495434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-07-16 01:32:42.495466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-07-16 01:32:42.495600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-07-16 01:32:42.495631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-07-16 01:32:42.495818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-07-16 01:32:42.495849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 
00:27:16.715 [2024-07-16 01:32:42.495970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-07-16 01:32:42.496001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-07-16 01:32:42.496178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-07-16 01:32:42.496210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-07-16 01:32:42.496399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-07-16 01:32:42.496434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-07-16 01:32:42.496615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-07-16 01:32:42.496647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-07-16 01:32:42.496792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-07-16 01:32:42.496824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-07-16 01:32:42.496938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-07-16 01:32:42.496969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-07-16 01:32:42.497146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-07-16 01:32:42.497177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-07-16 01:32:42.497302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-07-16 01:32:42.497333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-07-16 01:32:42.497424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-07-16 01:32:42.497434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-07-16 01:32:42.497606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-07-16 01:32:42.497649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 
00:27:16.715 [2024-07-16 01:32:42.497830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-07-16 01:32:42.497860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-07-16 01:32:42.497972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-07-16 01:32:42.498003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-07-16 01:32:42.498260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-07-16 01:32:42.498292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-07-16 01:32:42.498574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-07-16 01:32:42.498606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-07-16 01:32:42.498783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-07-16 01:32:42.498814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-07-16 01:32:42.498934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-07-16 01:32:42.498965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-07-16 01:32:42.499078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-07-16 01:32:42.499090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-07-16 01:32:42.499298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-07-16 01:32:42.499328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-07-16 01:32:42.499458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-07-16 01:32:42.499492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-07-16 01:32:42.499684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-07-16 01:32:42.499716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 
00:27:16.716 [2024-07-16 01:32:42.499892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-07-16 01:32:42.499923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-07-16 01:32:42.500033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-07-16 01:32:42.500044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-07-16 01:32:42.500118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-07-16 01:32:42.500128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-07-16 01:32:42.500363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-07-16 01:32:42.500405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-07-16 01:32:42.500532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-07-16 01:32:42.500563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-07-16 01:32:42.500691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-07-16 01:32:42.500722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-07-16 01:32:42.500860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-07-16 01:32:42.500891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-07-16 01:32:42.501081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-07-16 01:32:42.501112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-07-16 01:32:42.501230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-07-16 01:32:42.501241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-07-16 01:32:42.501455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-07-16 01:32:42.501488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 
00:27:16.716 [2024-07-16 01:32:42.501704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-07-16 01:32:42.501734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-07-16 01:32:42.501928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-07-16 01:32:42.501959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-07-16 01:32:42.502230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-07-16 01:32:42.502261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-07-16 01:32:42.502448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-07-16 01:32:42.502480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-07-16 01:32:42.502669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-07-16 01:32:42.502699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-07-16 01:32:42.502907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-07-16 01:32:42.502938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-07-16 01:32:42.503120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-07-16 01:32:42.503151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-07-16 01:32:42.503313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-07-16 01:32:42.503324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-07-16 01:32:42.503554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-07-16 01:32:42.503586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-07-16 01:32:42.503777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-07-16 01:32:42.503807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 
00:27:16.716 [2024-07-16 01:32:42.504013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-07-16 01:32:42.504044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.717 qpair failed and we were unable to recover it. 00:27:16.717 [2024-07-16 01:32:42.504306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.717 [2024-07-16 01:32:42.504317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.717 qpair failed and we were unable to recover it. 00:27:16.717 [2024-07-16 01:32:42.504472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.717 [2024-07-16 01:32:42.504484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.717 qpair failed and we were unable to recover it. 00:27:16.717 [2024-07-16 01:32:42.504557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.717 [2024-07-16 01:32:42.504568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.717 qpair failed and we were unable to recover it. 00:27:16.717 [2024-07-16 01:32:42.504765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.717 [2024-07-16 01:32:42.504776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.717 qpair failed and we were unable to recover it. 00:27:16.717 [2024-07-16 01:32:42.504857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.717 [2024-07-16 01:32:42.504867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.717 qpair failed and we were unable to recover it. 00:27:16.717 [2024-07-16 01:32:42.505019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.717 [2024-07-16 01:32:42.505030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.717 qpair failed and we were unable to recover it. 00:27:16.717 [2024-07-16 01:32:42.505240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.717 [2024-07-16 01:32:42.505271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.717 qpair failed and we were unable to recover it. 00:27:16.717 [2024-07-16 01:32:42.505457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.717 [2024-07-16 01:32:42.505495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.717 qpair failed and we were unable to recover it. 00:27:16.717 [2024-07-16 01:32:42.505679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.717 [2024-07-16 01:32:42.505710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.717 qpair failed and we were unable to recover it. 
00:27:16.717 [2024-07-16 01:32:42.505830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.717 [2024-07-16 01:32:42.505860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.717 qpair failed and we were unable to recover it. 00:27:16.717 [2024-07-16 01:32:42.505987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.717 [2024-07-16 01:32:42.506018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.717 qpair failed and we were unable to recover it. 00:27:16.717 [2024-07-16 01:32:42.506206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.717 [2024-07-16 01:32:42.506236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.717 qpair failed and we were unable to recover it. 00:27:16.717 [2024-07-16 01:32:42.506509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.717 [2024-07-16 01:32:42.506540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.717 qpair failed and we were unable to recover it. 00:27:16.717 [2024-07-16 01:32:42.506651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.717 [2024-07-16 01:32:42.506682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.717 qpair failed and we were unable to recover it. 00:27:16.717 [2024-07-16 01:32:42.506957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.717 [2024-07-16 01:32:42.506988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.717 qpair failed and we were unable to recover it. 00:27:16.717 [2024-07-16 01:32:42.507171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.717 [2024-07-16 01:32:42.507202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.717 qpair failed and we were unable to recover it. 00:27:16.717 [2024-07-16 01:32:42.507333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.717 [2024-07-16 01:32:42.507375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.717 qpair failed and we were unable to recover it. 00:27:16.717 [2024-07-16 01:32:42.507570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.717 [2024-07-16 01:32:42.507601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.717 qpair failed and we were unable to recover it. 00:27:16.717 [2024-07-16 01:32:42.507798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.717 [2024-07-16 01:32:42.507829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.717 qpair failed and we were unable to recover it. 
00:27:16.717 [2024-07-16 01:32:42.508005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.717 [2024-07-16 01:32:42.508017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.717 qpair failed and we were unable to recover it. 00:27:16.717 [2024-07-16 01:32:42.508101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.717 [2024-07-16 01:32:42.508111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.717 qpair failed and we were unable to recover it. 00:27:16.717 [2024-07-16 01:32:42.508263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.717 [2024-07-16 01:32:42.508274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.717 qpair failed and we were unable to recover it. 00:27:16.717 [2024-07-16 01:32:42.508423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.717 [2024-07-16 01:32:42.508458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.717 qpair failed and we were unable to recover it. 00:27:16.717 [2024-07-16 01:32:42.508645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.717 [2024-07-16 01:32:42.508676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.717 qpair failed and we were unable to recover it. 00:27:16.717 [2024-07-16 01:32:42.508942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.717 [2024-07-16 01:32:42.508974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.717 qpair failed and we were unable to recover it. 00:27:16.717 [2024-07-16 01:32:42.509097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.717 [2024-07-16 01:32:42.509108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.717 qpair failed and we were unable to recover it. 00:27:16.717 [2024-07-16 01:32:42.509169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.717 [2024-07-16 01:32:42.509179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.717 qpair failed and we were unable to recover it. 00:27:16.717 [2024-07-16 01:32:42.509244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.717 [2024-07-16 01:32:42.509275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.717 qpair failed and we were unable to recover it. 00:27:16.717 [2024-07-16 01:32:42.509452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.717 [2024-07-16 01:32:42.509485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.717 qpair failed and we were unable to recover it. 
00:27:16.717 [2024-07-16 01:32:42.509594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.717 [2024-07-16 01:32:42.509624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.717 qpair failed and we were unable to recover it. 00:27:16.717 [2024-07-16 01:32:42.509888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.717 [2024-07-16 01:32:42.509919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.717 qpair failed and we were unable to recover it. 00:27:16.717 [2024-07-16 01:32:42.510162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.717 [2024-07-16 01:32:42.510206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.717 qpair failed and we were unable to recover it. 00:27:16.717 [2024-07-16 01:32:42.510349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.717 [2024-07-16 01:32:42.510361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.717 qpair failed and we were unable to recover it. 00:27:16.717 [2024-07-16 01:32:42.510583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.717 [2024-07-16 01:32:42.510594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.717 qpair failed and we were unable to recover it. 00:27:16.717 [2024-07-16 01:32:42.510677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.717 [2024-07-16 01:32:42.510687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.717 qpair failed and we were unable to recover it. 00:27:16.718 [2024-07-16 01:32:42.510831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.718 [2024-07-16 01:32:42.510862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.718 qpair failed and we were unable to recover it. 00:27:16.718 [2024-07-16 01:32:42.511050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.718 [2024-07-16 01:32:42.511081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.718 qpair failed and we were unable to recover it. 00:27:16.718 [2024-07-16 01:32:42.511322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.718 [2024-07-16 01:32:42.511363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.718 qpair failed and we were unable to recover it. 00:27:16.718 [2024-07-16 01:32:42.511614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.718 [2024-07-16 01:32:42.511644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.718 qpair failed and we were unable to recover it. 
00:27:16.718 [2024-07-16 01:32:42.511768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.718 [2024-07-16 01:32:42.511799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.718 qpair failed and we were unable to recover it. 00:27:16.718 [2024-07-16 01:32:42.511974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.718 [2024-07-16 01:32:42.512005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.718 qpair failed and we were unable to recover it. 00:27:16.718 [2024-07-16 01:32:42.512295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.718 [2024-07-16 01:32:42.512326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.718 qpair failed and we were unable to recover it. 00:27:16.718 [2024-07-16 01:32:42.512605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.718 [2024-07-16 01:32:42.512639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.718 qpair failed and we were unable to recover it. 00:27:16.718 [2024-07-16 01:32:42.512815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.718 [2024-07-16 01:32:42.512846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.718 qpair failed and we were unable to recover it. 00:27:16.718 [2024-07-16 01:32:42.513032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.718 [2024-07-16 01:32:42.513062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.718 qpair failed and we were unable to recover it. 00:27:16.718 [2024-07-16 01:32:42.513184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.718 [2024-07-16 01:32:42.513215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.718 qpair failed and we were unable to recover it. 00:27:16.718 [2024-07-16 01:32:42.513483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.718 [2024-07-16 01:32:42.513515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.718 qpair failed and we were unable to recover it. 00:27:16.718 [2024-07-16 01:32:42.513619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.718 [2024-07-16 01:32:42.513656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.718 qpair failed and we were unable to recover it. 00:27:16.718 [2024-07-16 01:32:42.513851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.718 [2024-07-16 01:32:42.513882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.718 qpair failed and we were unable to recover it. 
00:27:16.718 [2024-07-16 01:32:42.514070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.718 [2024-07-16 01:32:42.514101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.718 qpair failed and we were unable to recover it. 00:27:16.718 [2024-07-16 01:32:42.514293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.718 [2024-07-16 01:32:42.514324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.718 qpair failed and we were unable to recover it. 00:27:16.718 [2024-07-16 01:32:42.514517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.718 [2024-07-16 01:32:42.514548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.718 qpair failed and we were unable to recover it. 00:27:16.718 [2024-07-16 01:32:42.514765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.718 [2024-07-16 01:32:42.514796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.718 qpair failed and we were unable to recover it. 00:27:16.718 [2024-07-16 01:32:42.514923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.718 [2024-07-16 01:32:42.514954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.718 qpair failed and we were unable to recover it. 00:27:16.718 [2024-07-16 01:32:42.515133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.718 [2024-07-16 01:32:42.515164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.718 qpair failed and we were unable to recover it. 00:27:16.718 [2024-07-16 01:32:42.515345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.718 [2024-07-16 01:32:42.515389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.718 qpair failed and we were unable to recover it. 00:27:16.718 [2024-07-16 01:32:42.515527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.718 [2024-07-16 01:32:42.515539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.718 qpair failed and we were unable to recover it. 00:27:16.718 [2024-07-16 01:32:42.515701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.718 [2024-07-16 01:32:42.515732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.718 qpair failed and we were unable to recover it. 00:27:16.718 [2024-07-16 01:32:42.515919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.718 [2024-07-16 01:32:42.515950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.718 qpair failed and we were unable to recover it. 
00:27:16.718 [2024-07-16 01:32:42.516144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.718 [2024-07-16 01:32:42.516174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.718 qpair failed and we were unable to recover it. 00:27:16.718 [2024-07-16 01:32:42.516401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.718 [2024-07-16 01:32:42.516413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.718 qpair failed and we were unable to recover it. 00:27:16.718 [2024-07-16 01:32:42.516494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.718 [2024-07-16 01:32:42.516504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.718 qpair failed and we were unable to recover it. 00:27:16.718 [2024-07-16 01:32:42.516564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.718 [2024-07-16 01:32:42.516574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.718 qpair failed and we were unable to recover it. 00:27:16.718 [2024-07-16 01:32:42.516656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.718 [2024-07-16 01:32:42.516685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.718 qpair failed and we were unable to recover it. 00:27:16.718 [2024-07-16 01:32:42.516805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.718 [2024-07-16 01:32:42.516836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.718 qpair failed and we were unable to recover it. 00:27:16.718 [2024-07-16 01:32:42.517128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.718 [2024-07-16 01:32:42.517160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.718 qpair failed and we were unable to recover it. 00:27:16.718 [2024-07-16 01:32:42.517422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.719 [2024-07-16 01:32:42.517433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.719 qpair failed and we were unable to recover it. 00:27:16.719 [2024-07-16 01:32:42.517566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.719 [2024-07-16 01:32:42.517577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.719 qpair failed and we were unable to recover it. 00:27:16.719 [2024-07-16 01:32:42.517662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.719 [2024-07-16 01:32:42.517672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.719 qpair failed and we were unable to recover it. 
00:27:16.719 [2024-07-16 01:32:42.517884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.719 [2024-07-16 01:32:42.517915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.719 qpair failed and we were unable to recover it. 00:27:16.719 [2024-07-16 01:32:42.518118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.719 [2024-07-16 01:32:42.518149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.719 qpair failed and we were unable to recover it. 00:27:16.719 [2024-07-16 01:32:42.518333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.719 [2024-07-16 01:32:42.518374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.719 qpair failed and we were unable to recover it. 00:27:16.719 [2024-07-16 01:32:42.518494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.719 [2024-07-16 01:32:42.518526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.719 qpair failed and we were unable to recover it. 00:27:16.719 [2024-07-16 01:32:42.518734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.719 [2024-07-16 01:32:42.518764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.719 qpair failed and we were unable to recover it. 00:27:16.719 [2024-07-16 01:32:42.518959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.719 [2024-07-16 01:32:42.518990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.719 qpair failed and we were unable to recover it. 00:27:16.719 [2024-07-16 01:32:42.519181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.719 [2024-07-16 01:32:42.519223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.719 qpair failed and we were unable to recover it. 00:27:16.719 [2024-07-16 01:32:42.519473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.719 [2024-07-16 01:32:42.519504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.719 qpair failed and we were unable to recover it. 00:27:16.719 [2024-07-16 01:32:42.519745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.719 [2024-07-16 01:32:42.519776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.719 qpair failed and we were unable to recover it. 00:27:16.719 [2024-07-16 01:32:42.520016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.719 [2024-07-16 01:32:42.520047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.719 qpair failed and we were unable to recover it. 
00:27:16.719 [2024-07-16 01:32:42.520205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.719 [2024-07-16 01:32:42.520216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.719 qpair failed and we were unable to recover it. 00:27:16.719 [2024-07-16 01:32:42.520426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.719 [2024-07-16 01:32:42.520462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.719 qpair failed and we were unable to recover it. 00:27:16.719 [2024-07-16 01:32:42.520603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.719 [2024-07-16 01:32:42.520634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.719 qpair failed and we were unable to recover it. 00:27:16.719 [2024-07-16 01:32:42.520877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.719 [2024-07-16 01:32:42.520908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.719 qpair failed and we were unable to recover it. 00:27:16.719 [2024-07-16 01:32:42.521152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.719 [2024-07-16 01:32:42.521184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.719 qpair failed and we were unable to recover it. 00:27:16.719 [2024-07-16 01:32:42.521297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.719 [2024-07-16 01:32:42.521327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.719 qpair failed and we were unable to recover it. 00:27:16.719 [2024-07-16 01:32:42.521547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.719 [2024-07-16 01:32:42.521579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.719 qpair failed and we were unable to recover it. 00:27:16.719 [2024-07-16 01:32:42.521820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.719 [2024-07-16 01:32:42.521851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.719 qpair failed and we were unable to recover it. 00:27:16.719 [2024-07-16 01:32:42.522127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.719 [2024-07-16 01:32:42.522165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.719 qpair failed and we were unable to recover it. 00:27:16.719 [2024-07-16 01:32:42.522307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.719 [2024-07-16 01:32:42.522318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.719 qpair failed and we were unable to recover it. 
00:27:16.721 [2024-07-16 01:32:42.530564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.721 [2024-07-16 01:32:42.530635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420
00:27:16.721 qpair failed and we were unable to recover it.
00:27:16.721 [2024-07-16 01:32:42.530883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.721 [2024-07-16 01:32:42.530952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420
00:27:16.721 qpair failed and we were unable to recover it.
00:27:16.721 [2024-07-16 01:32:42.531163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.721 [2024-07-16 01:32:42.531200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420
00:27:16.721 qpair failed and we were unable to recover it.
00:27:16.725 [2024-07-16 01:32:42.558894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.725 [2024-07-16 01:32:42.558903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.725 qpair failed and we were unable to recover it. 00:27:16.725 [2024-07-16 01:32:42.559070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.725 [2024-07-16 01:32:42.559080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.725 qpair failed and we were unable to recover it. 00:27:16.726 [2024-07-16 01:32:42.559235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-07-16 01:32:42.559266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-07-16 01:32:42.559463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-07-16 01:32:42.559499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-07-16 01:32:42.559683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-07-16 01:32:42.559714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-07-16 01:32:42.559961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-07-16 01:32:42.560032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-07-16 01:32:42.560167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-07-16 01:32:42.560185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-07-16 01:32:42.560415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-07-16 01:32:42.560451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-07-16 01:32:42.560723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-07-16 01:32:42.560754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-07-16 01:32:42.561038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-07-16 01:32:42.561069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 
00:27:16.726 [2024-07-16 01:32:42.561257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-07-16 01:32:42.561287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-07-16 01:32:42.561557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-07-16 01:32:42.561589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-07-16 01:32:42.561806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-07-16 01:32:42.561837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-07-16 01:32:42.561973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-07-16 01:32:42.562003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-07-16 01:32:42.562193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-07-16 01:32:42.562224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-07-16 01:32:42.562464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-07-16 01:32:42.562495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-07-16 01:32:42.562700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-07-16 01:32:42.562730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-07-16 01:32:42.562907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-07-16 01:32:42.562937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-07-16 01:32:42.563117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-07-16 01:32:42.563133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-07-16 01:32:42.563279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-07-16 01:32:42.563296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 
00:27:16.726 [2024-07-16 01:32:42.563512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-07-16 01:32:42.563526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-07-16 01:32:42.563728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-07-16 01:32:42.563739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-07-16 01:32:42.563881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-07-16 01:32:42.563893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-07-16 01:32:42.563970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-07-16 01:32:42.563979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-07-16 01:32:42.564183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-07-16 01:32:42.564213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-07-16 01:32:42.564328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-07-16 01:32:42.564372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-07-16 01:32:42.564562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-07-16 01:32:42.564593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-07-16 01:32:42.564775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-07-16 01:32:42.564806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-07-16 01:32:42.564980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-07-16 01:32:42.565011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.726 [2024-07-16 01:32:42.565311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-07-16 01:32:42.565351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 
00:27:16.726 [2024-07-16 01:32:42.565547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.726 [2024-07-16 01:32:42.565558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.726 qpair failed and we were unable to recover it. 00:27:16.727 [2024-07-16 01:32:42.565664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-07-16 01:32:42.565696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-07-16 01:32:42.565990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-07-16 01:32:42.566021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-07-16 01:32:42.566286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-07-16 01:32:42.566317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-07-16 01:32:42.566525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-07-16 01:32:42.566557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-07-16 01:32:42.566749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-07-16 01:32:42.566780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-07-16 01:32:42.566917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-07-16 01:32:42.566948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-07-16 01:32:42.567136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-07-16 01:32:42.567148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-07-16 01:32:42.567381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-07-16 01:32:42.567416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-07-16 01:32:42.567526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-07-16 01:32:42.567557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 
00:27:16.727 [2024-07-16 01:32:42.567731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-07-16 01:32:42.567762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-07-16 01:32:42.568002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-07-16 01:32:42.568033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-07-16 01:32:42.568275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-07-16 01:32:42.568306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-07-16 01:32:42.568456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-07-16 01:32:42.568468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-07-16 01:32:42.568682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-07-16 01:32:42.568712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-07-16 01:32:42.568901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-07-16 01:32:42.568937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-07-16 01:32:42.569077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-07-16 01:32:42.569107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-07-16 01:32:42.569240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-07-16 01:32:42.569270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-07-16 01:32:42.569451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-07-16 01:32:42.569482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-07-16 01:32:42.569607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-07-16 01:32:42.569638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 
00:27:16.727 [2024-07-16 01:32:42.569828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-07-16 01:32:42.569859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-07-16 01:32:42.570036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-07-16 01:32:42.570067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-07-16 01:32:42.570245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-07-16 01:32:42.570257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-07-16 01:32:42.570330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-07-16 01:32:42.570343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-07-16 01:32:42.570570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-07-16 01:32:42.570601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-07-16 01:32:42.570801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-07-16 01:32:42.570833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-07-16 01:32:42.570953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-07-16 01:32:42.570984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-07-16 01:32:42.571172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-07-16 01:32:42.571203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-07-16 01:32:42.571392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-07-16 01:32:42.571427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.728 [2024-07-16 01:32:42.571560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-07-16 01:32:42.571591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 
00:27:16.728 [2024-07-16 01:32:42.571768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-07-16 01:32:42.571799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-07-16 01:32:42.571996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-07-16 01:32:42.572026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-07-16 01:32:42.572276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-07-16 01:32:42.572307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-07-16 01:32:42.572455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-07-16 01:32:42.572466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-07-16 01:32:42.572596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-07-16 01:32:42.572606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-07-16 01:32:42.572750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-07-16 01:32:42.572781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-07-16 01:32:42.573026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-07-16 01:32:42.573057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-07-16 01:32:42.573254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-07-16 01:32:42.573295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-07-16 01:32:42.573433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-07-16 01:32:42.573445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-07-16 01:32:42.573609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-07-16 01:32:42.573639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 
00:27:16.728 [2024-07-16 01:32:42.573846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-07-16 01:32:42.573877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-07-16 01:32:42.574048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-07-16 01:32:42.574078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-07-16 01:32:42.574257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-07-16 01:32:42.574289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-07-16 01:32:42.574561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-07-16 01:32:42.574573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-07-16 01:32:42.574672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-07-16 01:32:42.574681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-07-16 01:32:42.574877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-07-16 01:32:42.574908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-07-16 01:32:42.575096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-07-16 01:32:42.575127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-07-16 01:32:42.575238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-07-16 01:32:42.575269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-07-16 01:32:42.575514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-07-16 01:32:42.575549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-07-16 01:32:42.575742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-07-16 01:32:42.575773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 
00:27:16.728 [2024-07-16 01:32:42.575876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-07-16 01:32:42.575905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-07-16 01:32:42.576099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-07-16 01:32:42.576129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-07-16 01:32:42.576398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-07-16 01:32:42.576430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-07-16 01:32:42.576595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-07-16 01:32:42.576606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-07-16 01:32:42.576809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-07-16 01:32:42.576840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-07-16 01:32:42.577107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-07-16 01:32:42.577144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-07-16 01:32:42.577326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-07-16 01:32:42.577367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-07-16 01:32:42.577553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-07-16 01:32:42.577585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-07-16 01:32:42.577775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-07-16 01:32:42.577806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-07-16 01:32:42.577991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-07-16 01:32:42.578022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 
00:27:16.729 [2024-07-16 01:32:42.578269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-07-16 01:32:42.578301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-07-16 01:32:42.578450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-07-16 01:32:42.578462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-07-16 01:32:42.578695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-07-16 01:32:42.578726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-07-16 01:32:42.578988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-07-16 01:32:42.579020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-07-16 01:32:42.579156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-07-16 01:32:42.579187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-07-16 01:32:42.579293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-07-16 01:32:42.579330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-07-16 01:32:42.579419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-07-16 01:32:42.579430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-07-16 01:32:42.579563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-07-16 01:32:42.579575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-07-16 01:32:42.579804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-07-16 01:32:42.579816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-07-16 01:32:42.579968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-07-16 01:32:42.580001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 
00:27:16.729 [2024-07-16 01:32:42.580217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-07-16 01:32:42.580250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-07-16 01:32:42.580448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-07-16 01:32:42.580480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-07-16 01:32:42.580688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-07-16 01:32:42.580720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-07-16 01:32:42.580824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-07-16 01:32:42.580856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-07-16 01:32:42.581099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-07-16 01:32:42.581130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-07-16 01:32:42.581242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-07-16 01:32:42.581274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-07-16 01:32:42.581475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-07-16 01:32:42.581486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-07-16 01:32:42.581571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-07-16 01:32:42.581581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-07-16 01:32:42.581783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-07-16 01:32:42.581814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-07-16 01:32:42.582082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-07-16 01:32:42.582112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 
00:27:16.729 [2024-07-16 01:32:42.582301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-07-16 01:32:42.582332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-07-16 01:32:42.582558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-07-16 01:32:42.582589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-07-16 01:32:42.582817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-07-16 01:32:42.582891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-07-16 01:32:42.583176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-07-16 01:32:42.583212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-07-16 01:32:42.583455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-07-16 01:32:42.583473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-07-16 01:32:42.583564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-07-16 01:32:42.583580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-07-16 01:32:42.583722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-07-16 01:32:42.583738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-07-16 01:32:42.583889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-07-16 01:32:42.583906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-07-16 01:32:42.584141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-07-16 01:32:42.584157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-07-16 01:32:42.584262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-07-16 01:32:42.584277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 
00:27:16.729 [2024-07-16 01:32:42.584519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-07-16 01:32:42.584552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-07-16 01:32:42.584839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-07-16 01:32:42.584870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-07-16 01:32:42.584998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-07-16 01:32:42.585029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-07-16 01:32:42.585209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-07-16 01:32:42.585239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-07-16 01:32:42.585392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-07-16 01:32:42.585426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-07-16 01:32:42.585598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-07-16 01:32:42.585637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-07-16 01:32:42.585838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-07-16 01:32:42.585869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-07-16 01:32:42.586056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.730 [2024-07-16 01:32:42.586087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.730 qpair failed and we were unable to recover it. 00:27:16.730 [2024-07-16 01:32:42.586262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.730 [2024-07-16 01:32:42.586292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.730 qpair failed and we were unable to recover it. 00:27:16.730 [2024-07-16 01:32:42.586424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.730 [2024-07-16 01:32:42.586456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.730 qpair failed and we were unable to recover it. 
00:27:16.730 [2024-07-16 01:32:42.586646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.730 [2024-07-16 01:32:42.586676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.730 qpair failed and we were unable to recover it. 00:27:16.730 [2024-07-16 01:32:42.586849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.730 [2024-07-16 01:32:42.586879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.730 qpair failed and we were unable to recover it. 00:27:16.730 [2024-07-16 01:32:42.587064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.730 [2024-07-16 01:32:42.587094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.730 qpair failed and we were unable to recover it. 00:27:16.730 [2024-07-16 01:32:42.587284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.730 [2024-07-16 01:32:42.587315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.730 qpair failed and we were unable to recover it. 00:27:16.730 [2024-07-16 01:32:42.587568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.730 [2024-07-16 01:32:42.587585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.730 qpair failed and we were unable to recover it. 00:27:16.730 [2024-07-16 01:32:42.587672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.730 [2024-07-16 01:32:42.587687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:16.730 qpair failed and we were unable to recover it. 00:27:16.730 [2024-07-16 01:32:42.587836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.730 [2024-07-16 01:32:42.587850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.730 qpair failed and we were unable to recover it. 00:27:16.730 [2024-07-16 01:32:42.588053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.730 [2024-07-16 01:32:42.588084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.730 qpair failed and we were unable to recover it. 00:27:16.730 [2024-07-16 01:32:42.588206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.730 [2024-07-16 01:32:42.588236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.730 qpair failed and we were unable to recover it. 00:27:16.730 [2024-07-16 01:32:42.588377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.730 [2024-07-16 01:32:42.588410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.730 qpair failed and we were unable to recover it. 
00:27:16.730 [2024-07-16 01:32:42.588627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.730 [2024-07-16 01:32:42.588658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.730 qpair failed and we were unable to recover it. 00:27:16.730 [2024-07-16 01:32:42.588829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.730 [2024-07-16 01:32:42.588860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.730 qpair failed and we were unable to recover it. 00:27:16.730 [2024-07-16 01:32:42.588989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.730 [2024-07-16 01:32:42.589020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.730 qpair failed and we were unable to recover it. 00:27:16.730 [2024-07-16 01:32:42.589207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.730 [2024-07-16 01:32:42.589238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.730 qpair failed and we were unable to recover it. 00:27:16.730 [2024-07-16 01:32:42.589375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.730 [2024-07-16 01:32:42.589407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.730 qpair failed and we were unable to recover it. 00:27:16.730 [2024-07-16 01:32:42.589664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.730 [2024-07-16 01:32:42.589695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.730 qpair failed and we were unable to recover it. 00:27:16.730 [2024-07-16 01:32:42.589828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.730 [2024-07-16 01:32:42.589860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.730 qpair failed and we were unable to recover it. 00:27:16.730 [2024-07-16 01:32:42.590050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.730 [2024-07-16 01:32:42.590082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.730 qpair failed and we were unable to recover it. 00:27:16.730 [2024-07-16 01:32:42.590321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.730 [2024-07-16 01:32:42.590332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.730 qpair failed and we were unable to recover it. 00:27:16.730 [2024-07-16 01:32:42.590536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.730 [2024-07-16 01:32:42.590568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.730 qpair failed and we were unable to recover it. 
00:27:16.730 [2024-07-16 01:32:42.590747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.730 [2024-07-16 01:32:42.590778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:16.730 qpair failed and we were unable to recover it.
00:27:16.730 [... the same three-line failure pattern repeats for every reconnect attempt from 01:32:42.590954 through 01:32:42.634128, each time with connect() failed, errno = 111 against addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it."; the affected tqpairs are 0x7ff0bc000b90, 0x7ff0b4000b90, 0x7ff0c4000b90, and (once, at 01:32:42.616188) 0x1d2cfc0 ...]
00:27:16.736 [2024-07-16 01:32:42.634305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-07-16 01:32:42.634345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-07-16 01:32:42.634538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-07-16 01:32:42.634570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-07-16 01:32:42.634692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-07-16 01:32:42.634703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-07-16 01:32:42.634795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-07-16 01:32:42.634804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-07-16 01:32:42.634890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-07-16 01:32:42.634920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-07-16 01:32:42.635105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-07-16 01:32:42.635137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-07-16 01:32:42.635268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-07-16 01:32:42.635300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-07-16 01:32:42.635494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-07-16 01:32:42.635506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-07-16 01:32:42.635650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-07-16 01:32:42.635660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-07-16 01:32:42.635797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-07-16 01:32:42.635828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 
00:27:16.736 [2024-07-16 01:32:42.636085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-07-16 01:32:42.636117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-07-16 01:32:42.636252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-07-16 01:32:42.636283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-07-16 01:32:42.636424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-07-16 01:32:42.636456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-07-16 01:32:42.636594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-07-16 01:32:42.636605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-07-16 01:32:42.636764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-07-16 01:32:42.636805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-07-16 01:32:42.636990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-07-16 01:32:42.637021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-07-16 01:32:42.637287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-07-16 01:32:42.637328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-07-16 01:32:42.637490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-07-16 01:32:42.637502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-07-16 01:32:42.637637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-07-16 01:32:42.637668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-07-16 01:32:42.637865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-07-16 01:32:42.637897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 
00:27:16.736 [2024-07-16 01:32:42.638085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-07-16 01:32:42.638116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-07-16 01:32:42.638255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-07-16 01:32:42.638286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-07-16 01:32:42.638437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-07-16 01:32:42.638448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-07-16 01:32:42.638528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-07-16 01:32:42.638538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-07-16 01:32:42.638664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-07-16 01:32:42.638675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-07-16 01:32:42.638771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-07-16 01:32:42.638781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-07-16 01:32:42.638919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-07-16 01:32:42.638949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-07-16 01:32:42.639076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-07-16 01:32:42.639107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-07-16 01:32:42.639228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-07-16 01:32:42.639258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-07-16 01:32:42.639380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-07-16 01:32:42.639442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 
00:27:16.736 [2024-07-16 01:32:42.639723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-07-16 01:32:42.639735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-07-16 01:32:42.639798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-07-16 01:32:42.639808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-07-16 01:32:42.639965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-07-16 01:32:42.639978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-07-16 01:32:42.640053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-07-16 01:32:42.640063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-07-16 01:32:42.640233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-07-16 01:32:42.640272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-07-16 01:32:42.640382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-07-16 01:32:42.640420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-07-16 01:32:42.640619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-07-16 01:32:42.640631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-07-16 01:32:42.640799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-07-16 01:32:42.640810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-07-16 01:32:42.640903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-07-16 01:32:42.640931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-07-16 01:32:42.641129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-07-16 01:32:42.641160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 
00:27:16.737 [2024-07-16 01:32:42.641349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-07-16 01:32:42.641381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-07-16 01:32:42.641597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-07-16 01:32:42.641628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-07-16 01:32:42.641756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-07-16 01:32:42.641787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-07-16 01:32:42.641908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-07-16 01:32:42.641920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-07-16 01:32:42.641995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-07-16 01:32:42.642006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-07-16 01:32:42.642273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-07-16 01:32:42.642284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-07-16 01:32:42.642446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-07-16 01:32:42.642478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-07-16 01:32:42.642662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-07-16 01:32:42.642692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-07-16 01:32:42.642985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-07-16 01:32:42.643016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-07-16 01:32:42.643195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-07-16 01:32:42.643226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 
00:27:16.737 [2024-07-16 01:32:42.643408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-07-16 01:32:42.643421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-07-16 01:32:42.643571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-07-16 01:32:42.643582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-07-16 01:32:42.643712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-07-16 01:32:42.643723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:17.022 [2024-07-16 01:32:42.643919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.022 [2024-07-16 01:32:42.643930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.022 qpair failed and we were unable to recover it. 00:27:17.022 [2024-07-16 01:32:42.643999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.022 [2024-07-16 01:32:42.644010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.022 qpair failed and we were unable to recover it. 00:27:17.022 [2024-07-16 01:32:42.644228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.022 [2024-07-16 01:32:42.644239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.022 qpair failed and we were unable to recover it. 00:27:17.022 [2024-07-16 01:32:42.644325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.022 [2024-07-16 01:32:42.644334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.022 qpair failed and we were unable to recover it. 00:27:17.022 [2024-07-16 01:32:42.644517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.022 [2024-07-16 01:32:42.644528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.022 qpair failed and we were unable to recover it. 00:27:17.022 [2024-07-16 01:32:42.644620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.022 [2024-07-16 01:32:42.644630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.022 qpair failed and we were unable to recover it. 00:27:17.022 [2024-07-16 01:32:42.644698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.022 [2024-07-16 01:32:42.644708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.022 qpair failed and we were unable to recover it. 
00:27:17.022 [2024-07-16 01:32:42.644772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.022 [2024-07-16 01:32:42.644783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.022 qpair failed and we were unable to recover it. 00:27:17.022 [2024-07-16 01:32:42.644929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.022 [2024-07-16 01:32:42.644939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.022 qpair failed and we were unable to recover it. 00:27:17.022 [2024-07-16 01:32:42.645031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.022 [2024-07-16 01:32:42.645041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.022 qpair failed and we were unable to recover it. 00:27:17.022 [2024-07-16 01:32:42.645271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.022 [2024-07-16 01:32:42.645283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.022 qpair failed and we were unable to recover it. 00:27:17.022 [2024-07-16 01:32:42.645431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.022 [2024-07-16 01:32:42.645442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.022 qpair failed and we were unable to recover it. 00:27:17.022 [2024-07-16 01:32:42.645535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.022 [2024-07-16 01:32:42.645545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.022 qpair failed and we were unable to recover it. 00:27:17.022 [2024-07-16 01:32:42.645691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.022 [2024-07-16 01:32:42.645703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.022 qpair failed and we were unable to recover it. 00:27:17.022 [2024-07-16 01:32:42.645852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.022 [2024-07-16 01:32:42.645863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.022 qpair failed and we were unable to recover it. 00:27:17.022 [2024-07-16 01:32:42.646047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.022 [2024-07-16 01:32:42.646059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.022 qpair failed and we were unable to recover it. 00:27:17.022 [2024-07-16 01:32:42.646137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.022 [2024-07-16 01:32:42.646147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.022 qpair failed and we were unable to recover it. 
00:27:17.022 [2024-07-16 01:32:42.646395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.022 [2024-07-16 01:32:42.646407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.022 qpair failed and we were unable to recover it. 00:27:17.022 [2024-07-16 01:32:42.646554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.022 [2024-07-16 01:32:42.646565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.022 qpair failed and we were unable to recover it. 00:27:17.022 [2024-07-16 01:32:42.646659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.022 [2024-07-16 01:32:42.646672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.022 qpair failed and we were unable to recover it. 00:27:17.022 [2024-07-16 01:32:42.646849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.022 [2024-07-16 01:32:42.646860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.022 qpair failed and we were unable to recover it. 00:27:17.022 [2024-07-16 01:32:42.647006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.022 [2024-07-16 01:32:42.647017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.022 qpair failed and we were unable to recover it. 00:27:17.022 [2024-07-16 01:32:42.647102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.022 [2024-07-16 01:32:42.647112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.022 qpair failed and we were unable to recover it. 00:27:17.022 [2024-07-16 01:32:42.647262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.022 [2024-07-16 01:32:42.647273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.022 qpair failed and we were unable to recover it. 00:27:17.022 [2024-07-16 01:32:42.647422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.022 [2024-07-16 01:32:42.647435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.022 qpair failed and we were unable to recover it. 00:27:17.022 [2024-07-16 01:32:42.647507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.022 [2024-07-16 01:32:42.647517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.022 qpair failed and we were unable to recover it. 00:27:17.022 [2024-07-16 01:32:42.647611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.022 [2024-07-16 01:32:42.647621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.022 qpair failed and we were unable to recover it. 
00:27:17.022 [2024-07-16 01:32:42.647819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-07-16 01:32:42.647830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 00:27:17.023 [2024-07-16 01:32:42.647896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-07-16 01:32:42.647907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 00:27:17.023 [2024-07-16 01:32:42.648055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-07-16 01:32:42.648066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 00:27:17.023 [2024-07-16 01:32:42.648144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-07-16 01:32:42.648154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 00:27:17.023 [2024-07-16 01:32:42.648285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-07-16 01:32:42.648296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 00:27:17.023 [2024-07-16 01:32:42.648383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-07-16 01:32:42.648393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 00:27:17.023 [2024-07-16 01:32:42.648467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-07-16 01:32:42.648478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 00:27:17.023 [2024-07-16 01:32:42.648577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-07-16 01:32:42.648587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 00:27:17.023 [2024-07-16 01:32:42.648669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-07-16 01:32:42.648680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 00:27:17.023 [2024-07-16 01:32:42.648741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-07-16 01:32:42.648752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 
00:27:17.023 [2024-07-16 01:32:42.648829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-07-16 01:32:42.648839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 00:27:17.023 [2024-07-16 01:32:42.648914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-07-16 01:32:42.648924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 00:27:17.023 [2024-07-16 01:32:42.648994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-07-16 01:32:42.649004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 00:27:17.023 [2024-07-16 01:32:42.649078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-07-16 01:32:42.649088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 00:27:17.023 [2024-07-16 01:32:42.649235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-07-16 01:32:42.649245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 00:27:17.023 [2024-07-16 01:32:42.649322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-07-16 01:32:42.649332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 00:27:17.023 [2024-07-16 01:32:42.649474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-07-16 01:32:42.649486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 00:27:17.023 [2024-07-16 01:32:42.649621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-07-16 01:32:42.649632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 00:27:17.023 [2024-07-16 01:32:42.649718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-07-16 01:32:42.649728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 00:27:17.023 [2024-07-16 01:32:42.649800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-07-16 01:32:42.649810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 
00:27:17.023 [2024-07-16 01:32:42.649948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-07-16 01:32:42.649959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 00:27:17.023 [2024-07-16 01:32:42.650162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-07-16 01:32:42.650193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 00:27:17.023 [2024-07-16 01:32:42.650312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-07-16 01:32:42.650349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 00:27:17.023 [2024-07-16 01:32:42.650482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-07-16 01:32:42.650513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 00:27:17.023 [2024-07-16 01:32:42.650688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-07-16 01:32:42.650719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 00:27:17.023 [2024-07-16 01:32:42.650860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-07-16 01:32:42.650890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 00:27:17.023 [2024-07-16 01:32:42.650994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-07-16 01:32:42.651026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 00:27:17.023 [2024-07-16 01:32:42.651233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-07-16 01:32:42.651265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 00:27:17.023 [2024-07-16 01:32:42.651414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-07-16 01:32:42.651427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 00:27:17.023 [2024-07-16 01:32:42.651589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-07-16 01:32:42.651600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 
00:27:17.023 [2024-07-16 01:32:42.651809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-07-16 01:32:42.651841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 00:27:17.023 [2024-07-16 01:32:42.652039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-07-16 01:32:42.652070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 00:27:17.023 [2024-07-16 01:32:42.652277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-07-16 01:32:42.652314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 00:27:17.023 [2024-07-16 01:32:42.652521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-07-16 01:32:42.652553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 00:27:17.023 [2024-07-16 01:32:42.652741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-07-16 01:32:42.652773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 00:27:17.023 [2024-07-16 01:32:42.653016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-07-16 01:32:42.653047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 00:27:17.023 [2024-07-16 01:32:42.653234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-07-16 01:32:42.653265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 00:27:17.023 [2024-07-16 01:32:42.653459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-07-16 01:32:42.653491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 00:27:17.023 [2024-07-16 01:32:42.653732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-07-16 01:32:42.653743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 00:27:17.023 [2024-07-16 01:32:42.653834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-07-16 01:32:42.653863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 
00:27:17.023 [2024-07-16 01:32:42.653998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.024 [2024-07-16 01:32:42.654028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.024 qpair failed and we were unable to recover it. 00:27:17.024 [2024-07-16 01:32:42.654294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.024 [2024-07-16 01:32:42.654326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.024 qpair failed and we were unable to recover it. 00:27:17.024 [2024-07-16 01:32:42.654523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.024 [2024-07-16 01:32:42.654534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.024 qpair failed and we were unable to recover it. 00:27:17.024 [2024-07-16 01:32:42.654607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.024 [2024-07-16 01:32:42.654617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.024 qpair failed and we were unable to recover it. 00:27:17.024 [2024-07-16 01:32:42.654769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.024 [2024-07-16 01:32:42.654781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.024 qpair failed and we were unable to recover it. 00:27:17.024 [2024-07-16 01:32:42.654929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.024 [2024-07-16 01:32:42.654960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.024 qpair failed and we were unable to recover it. 00:27:17.024 [2024-07-16 01:32:42.655208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.024 [2024-07-16 01:32:42.655240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.024 qpair failed and we were unable to recover it. 00:27:17.024 [2024-07-16 01:32:42.655376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.024 [2024-07-16 01:32:42.655411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.024 qpair failed and we were unable to recover it. 00:27:17.024 [2024-07-16 01:32:42.655651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.024 [2024-07-16 01:32:42.655662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.024 qpair failed and we were unable to recover it. 00:27:17.024 [2024-07-16 01:32:42.655752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.024 [2024-07-16 01:32:42.655763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.024 qpair failed and we were unable to recover it. 
00:27:17.024 [2024-07-16 01:32:42.655832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.024 [2024-07-16 01:32:42.655842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:17.024 qpair failed and we were unable to recover it.
00:27:17.024 [... the same three-line connect()/qpair failure repeats continuously from 01:32:42.656 through 01:32:42.674, differing only in timestamps ...]
00:27:17.026 [... connect()/qpair failure triplet continues through 01:32:42.675 ...]
00:27:17.026 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3543699 Killed "${NVMF_APP[@]}" "$@"
00:27:17.027 [... connect()/qpair failure triplet repeats while the target is down ...]
00:27:17.027 01:32:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:27:17.027 [... connect()/qpair failure triplet ...]
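The errno = 111 in the repeated triplet is ECONNREFUSED: the target process serving 10.0.0.2:4420 was just killed by the test (the "Killed" line above), so every reconnect the host driver attempts is refused until a new listener appears. A minimal probe sketch of the same condition, assuming a bash built with /dev/tcp support; the address and port come from the log, but the loop itself is illustrative and not part of target_disconnect.sh:

    # probe 10.0.0.2:4420 until a listener accepts; a refused connect here
    # is exactly the errno = 111 the SPDK host keeps logging above
    until (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; do
        echo "connect() refused (errno 111); target not listening yet"
        sleep 1
    done
    echo "target is accepting connections on 10.0.0.2:4420 again"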
00:27:17.027 01:32:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:27:17.027 01:32:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:27:17.027 01:32:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:27:17.027 01:32:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:17.027 [... connect()/qpair failure triplets from 01:32:42.676 through 01:32:42.678 interleaved with the xtrace output above ...]
00:27:17.027 [... the connect()/qpair failure triplet repeats continuously from 01:32:42.678 through 01:32:42.682, differing only in timestamps ...]
00:27:17.028 01:32:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3544627
00:27:17.028 01:32:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3544627
00:27:17.028 01:32:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3544627 ']'
00:27:17.028 01:32:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:17.028 01:32:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:27:17.028 01:32:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:17.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:17.028 [... connect()/qpair failure triplets from 01:32:42.682 through 01:32:42.683 interleaved with the xtrace output above ...]
00:27:17.028 01:32:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:27:17.028 01:32:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:27:17.028 01:32:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:17.028 [... connect()/qpair failure triplets continue from 01:32:42.683 through 01:32:42.684 while the new target starts up ...]
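The xtrace above relaunches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then waits for it to listen. A minimal sketch of that sequence using only the values visible in the log (binary path, namespace, -i/-e/-m flags, rpc_addr=/var/tmp/spdk.sock, max_retries=100); the polling loop is an assumption and is simpler than SPDK's real waitforlisten helper:

    # relaunch the target in its network namespace (command taken from the log)
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!

    # poll for the RPC listener, mirroring waitforlisten's rpc_addr/max_retries
    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    for ((i = 0; i < max_retries; i++)); do
        [ -S "$rpc_addr" ] && break   # UNIX domain socket exists once the app is up
        sleep 0.5
    done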
00:27:17.028 [... the connect()/qpair failure triplet repeats continuously from 01:32:42.684 through 01:32:42.689, differing only in timestamps, while the host keeps retrying the connection ...]
00:27:17.029 [2024-07-16 01:32:42.689347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.029 [2024-07-16 01:32:42.689360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.029 qpair failed and we were unable to recover it. 00:27:17.029 [2024-07-16 01:32:42.689562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.029 [2024-07-16 01:32:42.689574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.029 qpair failed and we were unable to recover it. 00:27:17.029 [2024-07-16 01:32:42.689652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.029 [2024-07-16 01:32:42.689663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.029 qpair failed and we were unable to recover it. 00:27:17.029 [2024-07-16 01:32:42.689730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.029 [2024-07-16 01:32:42.689740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.029 qpair failed and we were unable to recover it. 00:27:17.029 [2024-07-16 01:32:42.689825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.029 [2024-07-16 01:32:42.689837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.029 qpair failed and we were unable to recover it. 00:27:17.029 [2024-07-16 01:32:42.689966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.029 [2024-07-16 01:32:42.689977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.029 qpair failed and we were unable to recover it. 00:27:17.029 [2024-07-16 01:32:42.690124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.029 [2024-07-16 01:32:42.690135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.029 qpair failed and we were unable to recover it. 00:27:17.029 [2024-07-16 01:32:42.690254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.029 [2024-07-16 01:32:42.690264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.029 qpair failed and we were unable to recover it. 00:27:17.029 [2024-07-16 01:32:42.690340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.029 [2024-07-16 01:32:42.690350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.029 qpair failed and we were unable to recover it. 00:27:17.029 [2024-07-16 01:32:42.690437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.029 [2024-07-16 01:32:42.690447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.029 qpair failed and we were unable to recover it. 
00:27:17.029 [2024-07-16 01:32:42.690518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.029 [2024-07-16 01:32:42.690528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.029 qpair failed and we were unable to recover it. 00:27:17.029 [2024-07-16 01:32:42.690594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.029 [2024-07-16 01:32:42.690604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.029 qpair failed and we were unable to recover it. 00:27:17.029 [2024-07-16 01:32:42.690689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.029 [2024-07-16 01:32:42.690700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.029 qpair failed and we were unable to recover it. 00:27:17.029 [2024-07-16 01:32:42.690769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.029 [2024-07-16 01:32:42.690779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.029 qpair failed and we were unable to recover it. 00:27:17.029 [2024-07-16 01:32:42.690849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.029 [2024-07-16 01:32:42.690859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.029 qpair failed and we were unable to recover it. 00:27:17.029 [2024-07-16 01:32:42.691013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.029 [2024-07-16 01:32:42.691024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.029 qpair failed and we were unable to recover it. 00:27:17.029 [2024-07-16 01:32:42.691096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.029 [2024-07-16 01:32:42.691106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.029 qpair failed and we were unable to recover it. 00:27:17.029 [2024-07-16 01:32:42.691193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.029 [2024-07-16 01:32:42.691204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.029 qpair failed and we were unable to recover it. 00:27:17.029 [2024-07-16 01:32:42.691284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.030 [2024-07-16 01:32:42.691295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.030 qpair failed and we were unable to recover it. 00:27:17.030 [2024-07-16 01:32:42.691363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.030 [2024-07-16 01:32:42.691374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.030 qpair failed and we were unable to recover it. 
00:27:17.030 [2024-07-16 01:32:42.691431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.030 [2024-07-16 01:32:42.691442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.030 qpair failed and we were unable to recover it. 00:27:17.030 [2024-07-16 01:32:42.691579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.030 [2024-07-16 01:32:42.691592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.030 qpair failed and we were unable to recover it. 00:27:17.030 [2024-07-16 01:32:42.691667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.030 [2024-07-16 01:32:42.691679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.030 qpair failed and we were unable to recover it. 00:27:17.030 [2024-07-16 01:32:42.691759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.030 [2024-07-16 01:32:42.691770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.030 qpair failed and we were unable to recover it. 00:27:17.030 [2024-07-16 01:32:42.691839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.030 [2024-07-16 01:32:42.691850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.030 qpair failed and we were unable to recover it. 00:27:17.030 [2024-07-16 01:32:42.691916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.030 [2024-07-16 01:32:42.691926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.030 qpair failed and we were unable to recover it. 00:27:17.030 [2024-07-16 01:32:42.692061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.030 [2024-07-16 01:32:42.692072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.030 qpair failed and we were unable to recover it. 00:27:17.030 [2024-07-16 01:32:42.692207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.030 [2024-07-16 01:32:42.692219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.030 qpair failed and we were unable to recover it. 00:27:17.030 [2024-07-16 01:32:42.692291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.030 [2024-07-16 01:32:42.692302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.030 qpair failed and we were unable to recover it. 00:27:17.030 [2024-07-16 01:32:42.692447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.030 [2024-07-16 01:32:42.692460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.030 qpair failed and we were unable to recover it. 
00:27:17.030 [2024-07-16 01:32:42.692620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.030 [2024-07-16 01:32:42.692631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.030 qpair failed and we were unable to recover it. 00:27:17.030 [2024-07-16 01:32:42.692697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.030 [2024-07-16 01:32:42.692707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.030 qpair failed and we were unable to recover it. 00:27:17.030 [2024-07-16 01:32:42.692781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.030 [2024-07-16 01:32:42.692791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.030 qpair failed and we were unable to recover it. 00:27:17.030 [2024-07-16 01:32:42.692926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.030 [2024-07-16 01:32:42.692937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.030 qpair failed and we were unable to recover it. 00:27:17.030 [2024-07-16 01:32:42.693066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.030 [2024-07-16 01:32:42.693077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.030 qpair failed and we were unable to recover it. 00:27:17.030 [2024-07-16 01:32:42.693154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.030 [2024-07-16 01:32:42.693164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.030 qpair failed and we were unable to recover it. 00:27:17.030 [2024-07-16 01:32:42.693311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.030 [2024-07-16 01:32:42.693322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.030 qpair failed and we were unable to recover it. 00:27:17.030 [2024-07-16 01:32:42.693401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.030 [2024-07-16 01:32:42.693413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.030 qpair failed and we were unable to recover it. 00:27:17.030 [2024-07-16 01:32:42.693488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.030 [2024-07-16 01:32:42.693504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.030 qpair failed and we were unable to recover it. 00:27:17.030 [2024-07-16 01:32:42.693663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.030 [2024-07-16 01:32:42.693674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.030 qpair failed and we were unable to recover it. 
00:27:17.030 [2024-07-16 01:32:42.693759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.030 [2024-07-16 01:32:42.693772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.030 qpair failed and we were unable to recover it. 00:27:17.030 [2024-07-16 01:32:42.693938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.030 [2024-07-16 01:32:42.693949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.030 qpair failed and we were unable to recover it. 00:27:17.030 [2024-07-16 01:32:42.694029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.030 [2024-07-16 01:32:42.694040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.030 qpair failed and we were unable to recover it. 00:27:17.030 [2024-07-16 01:32:42.694181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.030 [2024-07-16 01:32:42.694193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.030 qpair failed and we were unable to recover it. 00:27:17.030 [2024-07-16 01:32:42.694354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.030 [2024-07-16 01:32:42.694365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.030 qpair failed and we were unable to recover it. 00:27:17.030 [2024-07-16 01:32:42.694527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.030 [2024-07-16 01:32:42.694539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.030 qpair failed and we were unable to recover it. 00:27:17.030 [2024-07-16 01:32:42.694628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.030 [2024-07-16 01:32:42.694639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.030 qpair failed and we were unable to recover it. 00:27:17.030 [2024-07-16 01:32:42.694772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.030 [2024-07-16 01:32:42.694783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.030 qpair failed and we were unable to recover it. 00:27:17.030 [2024-07-16 01:32:42.694861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.030 [2024-07-16 01:32:42.694872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.030 qpair failed and we were unable to recover it. 00:27:17.030 [2024-07-16 01:32:42.694944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.030 [2024-07-16 01:32:42.694955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.030 qpair failed and we were unable to recover it. 
00:27:17.030 [2024-07-16 01:32:42.695124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.030 [2024-07-16 01:32:42.695135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.030 qpair failed and we were unable to recover it. 00:27:17.030 [2024-07-16 01:32:42.695196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.030 [2024-07-16 01:32:42.695206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.030 qpair failed and we were unable to recover it. 00:27:17.030 [2024-07-16 01:32:42.695272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.030 [2024-07-16 01:32:42.695283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.030 qpair failed and we were unable to recover it. 00:27:17.030 [2024-07-16 01:32:42.695421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.031 [2024-07-16 01:32:42.695432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.031 qpair failed and we were unable to recover it. 00:27:17.031 [2024-07-16 01:32:42.695644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.031 [2024-07-16 01:32:42.695655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.031 qpair failed and we were unable to recover it. 00:27:17.031 [2024-07-16 01:32:42.695744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.031 [2024-07-16 01:32:42.695755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.031 qpair failed and we were unable to recover it. 00:27:17.031 [2024-07-16 01:32:42.695906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.031 [2024-07-16 01:32:42.695917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.031 qpair failed and we were unable to recover it. 00:27:17.031 [2024-07-16 01:32:42.696066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.031 [2024-07-16 01:32:42.696077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.031 qpair failed and we were unable to recover it. 00:27:17.031 [2024-07-16 01:32:42.696145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.031 [2024-07-16 01:32:42.696155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.031 qpair failed and we were unable to recover it. 00:27:17.031 [2024-07-16 01:32:42.696323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.031 [2024-07-16 01:32:42.696334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.031 qpair failed and we were unable to recover it. 
00:27:17.031 [2024-07-16 01:32:42.696427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.031 [2024-07-16 01:32:42.696440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.031 qpair failed and we were unable to recover it. 00:27:17.031 [2024-07-16 01:32:42.696518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.031 [2024-07-16 01:32:42.696529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.031 qpair failed and we were unable to recover it. 00:27:17.031 [2024-07-16 01:32:42.696605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.031 [2024-07-16 01:32:42.696616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.031 qpair failed and we were unable to recover it. 00:27:17.031 [2024-07-16 01:32:42.696703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.031 [2024-07-16 01:32:42.696714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.031 qpair failed and we were unable to recover it. 00:27:17.031 [2024-07-16 01:32:42.696851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.031 [2024-07-16 01:32:42.696863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.031 qpair failed and we were unable to recover it. 00:27:17.031 [2024-07-16 01:32:42.696945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.031 [2024-07-16 01:32:42.696955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.031 qpair failed and we were unable to recover it. 00:27:17.031 [2024-07-16 01:32:42.697082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.031 [2024-07-16 01:32:42.697093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.031 qpair failed and we were unable to recover it. 00:27:17.031 [2024-07-16 01:32:42.697224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.031 [2024-07-16 01:32:42.697235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.031 qpair failed and we were unable to recover it. 00:27:17.031 [2024-07-16 01:32:42.697315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.031 [2024-07-16 01:32:42.697325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.031 qpair failed and we were unable to recover it. 00:27:17.031 [2024-07-16 01:32:42.697399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.031 [2024-07-16 01:32:42.697410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.031 qpair failed and we were unable to recover it. 
00:27:17.031 [2024-07-16 01:32:42.697490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.031 [2024-07-16 01:32:42.697501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.031 qpair failed and we were unable to recover it. 00:27:17.031 [2024-07-16 01:32:42.697578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.031 [2024-07-16 01:32:42.697589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.031 qpair failed and we were unable to recover it. 00:27:17.031 [2024-07-16 01:32:42.697662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.031 [2024-07-16 01:32:42.697671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.031 qpair failed and we were unable to recover it. 00:27:17.031 [2024-07-16 01:32:42.697736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.031 [2024-07-16 01:32:42.697746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.031 qpair failed and we were unable to recover it. 00:27:17.031 [2024-07-16 01:32:42.697829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.031 [2024-07-16 01:32:42.697839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.031 qpair failed and we were unable to recover it. 00:27:17.031 [2024-07-16 01:32:42.697910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.031 [2024-07-16 01:32:42.697920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.031 qpair failed and we were unable to recover it. 00:27:17.031 [2024-07-16 01:32:42.698049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.031 [2024-07-16 01:32:42.698060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.031 qpair failed and we were unable to recover it. 00:27:17.031 [2024-07-16 01:32:42.698136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.031 [2024-07-16 01:32:42.698146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.031 qpair failed and we were unable to recover it. 00:27:17.031 [2024-07-16 01:32:42.698212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.031 [2024-07-16 01:32:42.698223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.031 qpair failed and we were unable to recover it. 00:27:17.031 [2024-07-16 01:32:42.698289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.031 [2024-07-16 01:32:42.698298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.031 qpair failed and we were unable to recover it. 
00:27:17.031 [2024-07-16 01:32:42.698377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.031 [2024-07-16 01:32:42.698390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.031 qpair failed and we were unable to recover it. 00:27:17.031 [2024-07-16 01:32:42.698454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.031 [2024-07-16 01:32:42.698466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.031 qpair failed and we were unable to recover it. 00:27:17.031 [2024-07-16 01:32:42.698557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.031 [2024-07-16 01:32:42.698568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.031 qpair failed and we were unable to recover it. 00:27:17.031 [2024-07-16 01:32:42.698653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.031 [2024-07-16 01:32:42.698663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.031 qpair failed and we were unable to recover it. 00:27:17.031 [2024-07-16 01:32:42.698798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.031 [2024-07-16 01:32:42.698809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.031 qpair failed and we were unable to recover it. 00:27:17.031 [2024-07-16 01:32:42.698939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.031 [2024-07-16 01:32:42.698950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.031 qpair failed and we were unable to recover it. 00:27:17.031 [2024-07-16 01:32:42.699017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.031 [2024-07-16 01:32:42.699027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.031 qpair failed and we were unable to recover it. 00:27:17.031 [2024-07-16 01:32:42.699182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.031 [2024-07-16 01:32:42.699193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.031 qpair failed and we were unable to recover it. 00:27:17.031 [2024-07-16 01:32:42.699281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.031 [2024-07-16 01:32:42.699293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.031 qpair failed and we were unable to recover it. 00:27:17.031 [2024-07-16 01:32:42.699359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.031 [2024-07-16 01:32:42.699369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.031 qpair failed and we were unable to recover it. 
00:27:17.031 [2024-07-16 01:32:42.699426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.031 [2024-07-16 01:32:42.699436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.031 qpair failed and we were unable to recover it. 00:27:17.031 [2024-07-16 01:32:42.699520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.031 [2024-07-16 01:32:42.699530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.031 qpair failed and we were unable to recover it. 00:27:17.031 [2024-07-16 01:32:42.699672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.031 [2024-07-16 01:32:42.699683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.031 qpair failed and we were unable to recover it. 00:27:17.031 [2024-07-16 01:32:42.699775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.031 [2024-07-16 01:32:42.699787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.031 qpair failed and we were unable to recover it. 00:27:17.031 [2024-07-16 01:32:42.699930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.032 [2024-07-16 01:32:42.699941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.032 qpair failed and we were unable to recover it. 00:27:17.032 [2024-07-16 01:32:42.700209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.032 [2024-07-16 01:32:42.700220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.032 qpair failed and we were unable to recover it. 00:27:17.032 [2024-07-16 01:32:42.700301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.032 [2024-07-16 01:32:42.700312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.032 qpair failed and we were unable to recover it. 00:27:17.032 [2024-07-16 01:32:42.700469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.032 [2024-07-16 01:32:42.700480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.032 qpair failed and we were unable to recover it. 00:27:17.032 [2024-07-16 01:32:42.700643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.032 [2024-07-16 01:32:42.700653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.032 qpair failed and we were unable to recover it. 00:27:17.032 [2024-07-16 01:32:42.700740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.032 [2024-07-16 01:32:42.700751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.032 qpair failed and we were unable to recover it. 
00:27:17.032 [2024-07-16 01:32:42.700847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.032 [2024-07-16 01:32:42.700858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.032 qpair failed and we were unable to recover it. 00:27:17.032 [2024-07-16 01:32:42.700937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.032 [2024-07-16 01:32:42.700948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.032 qpair failed and we were unable to recover it. 00:27:17.032 [2024-07-16 01:32:42.701168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.032 [2024-07-16 01:32:42.701179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.032 qpair failed and we were unable to recover it. 00:27:17.032 [2024-07-16 01:32:42.701395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.032 [2024-07-16 01:32:42.701407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.032 qpair failed and we were unable to recover it. 00:27:17.032 [2024-07-16 01:32:42.701605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.032 [2024-07-16 01:32:42.701616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.032 qpair failed and we were unable to recover it. 00:27:17.032 [2024-07-16 01:32:42.701681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.032 [2024-07-16 01:32:42.701691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.032 qpair failed and we were unable to recover it. 00:27:17.032 [2024-07-16 01:32:42.701766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.032 [2024-07-16 01:32:42.701777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.032 qpair failed and we were unable to recover it. 00:27:17.032 [2024-07-16 01:32:42.701834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.032 [2024-07-16 01:32:42.701844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.032 qpair failed and we were unable to recover it. 00:27:17.032 [2024-07-16 01:32:42.701926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.032 [2024-07-16 01:32:42.701938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.032 qpair failed and we were unable to recover it. 00:27:17.032 [2024-07-16 01:32:42.702021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.032 [2024-07-16 01:32:42.702032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.032 qpair failed and we were unable to recover it. 
00:27:17.032 [2024-07-16 01:32:42.702129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.032 [2024-07-16 01:32:42.702140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.032 qpair failed and we were unable to recover it. 00:27:17.032 [2024-07-16 01:32:42.702272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.032 [2024-07-16 01:32:42.702283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.032 qpair failed and we were unable to recover it. 00:27:17.032 [2024-07-16 01:32:42.702363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.032 [2024-07-16 01:32:42.702374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.032 qpair failed and we were unable to recover it. 00:27:17.032 [2024-07-16 01:32:42.702536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.032 [2024-07-16 01:32:42.702548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.032 qpair failed and we were unable to recover it. 00:27:17.032 [2024-07-16 01:32:42.702679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.032 [2024-07-16 01:32:42.702691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.032 qpair failed and we were unable to recover it. 00:27:17.032 [2024-07-16 01:32:42.702843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.032 [2024-07-16 01:32:42.702854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.032 qpair failed and we were unable to recover it. 00:27:17.032 [2024-07-16 01:32:42.702986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.032 [2024-07-16 01:32:42.702997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.032 qpair failed and we were unable to recover it. 00:27:17.032 [2024-07-16 01:32:42.703068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.032 [2024-07-16 01:32:42.703079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.032 qpair failed and we were unable to recover it. 00:27:17.032 [2024-07-16 01:32:42.703152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.032 [2024-07-16 01:32:42.703163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.032 qpair failed and we were unable to recover it. 00:27:17.032 [2024-07-16 01:32:42.703233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.032 [2024-07-16 01:32:42.703244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.032 qpair failed and we were unable to recover it. 
00:27:17.032 [2024-07-16 01:32:42.703319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.032 [2024-07-16 01:32:42.703332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.032 qpair failed and we were unable to recover it. 00:27:17.032 [2024-07-16 01:32:42.703429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.032 [2024-07-16 01:32:42.703440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.032 qpair failed and we were unable to recover it. 00:27:17.032 [2024-07-16 01:32:42.703582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.032 [2024-07-16 01:32:42.703593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.032 qpair failed and we were unable to recover it. 00:27:17.032 [2024-07-16 01:32:42.703725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.032 [2024-07-16 01:32:42.703736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.032 qpair failed and we were unable to recover it. 00:27:17.032 [2024-07-16 01:32:42.703867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.032 [2024-07-16 01:32:42.703878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.032 qpair failed and we were unable to recover it. 00:27:17.032 [2024-07-16 01:32:42.703961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.032 [2024-07-16 01:32:42.703972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.032 qpair failed and we were unable to recover it. 00:27:17.032 [2024-07-16 01:32:42.704031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.032 [2024-07-16 01:32:42.704041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.032 qpair failed and we were unable to recover it. 00:27:17.032 [2024-07-16 01:32:42.704116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.032 [2024-07-16 01:32:42.704128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.032 qpair failed and we were unable to recover it. 00:27:17.032 [2024-07-16 01:32:42.704264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.032 [2024-07-16 01:32:42.704275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.032 qpair failed and we were unable to recover it. 00:27:17.032 [2024-07-16 01:32:42.704373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.032 [2024-07-16 01:32:42.704385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.032 qpair failed and we were unable to recover it. 
00:27:17.032 [2024-07-16 01:32:42.704486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.032 [2024-07-16 01:32:42.704498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:17.032 qpair failed and we were unable to recover it.
00:27:17.032 [... the same three-line error (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 01:32:42.704645 through 01:32:42.728392, over 200 occurrences with only the microsecond timestamps changing ...]
00:27:17.038 [2024-07-16 01:32:42.728484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.038 [2024-07-16 01:32:42.728496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:17.038 qpair failed and we were unable to recover it.
00:27:17.038 [2024-07-16 01:32:42.728723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.038 [2024-07-16 01:32:42.728734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.038 qpair failed and we were unable to recover it. 00:27:17.038 [2024-07-16 01:32:42.728821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.038 [2024-07-16 01:32:42.728832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.038 qpair failed and we were unable to recover it. 00:27:17.038 [2024-07-16 01:32:42.728918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.038 [2024-07-16 01:32:42.728929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.038 qpair failed and we were unable to recover it. 00:27:17.038 [2024-07-16 01:32:42.729080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.038 [2024-07-16 01:32:42.729092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.038 qpair failed and we were unable to recover it. 00:27:17.038 [2024-07-16 01:32:42.729219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.038 [2024-07-16 01:32:42.729230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.038 qpair failed and we were unable to recover it. 00:27:17.038 [2024-07-16 01:32:42.729379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.038 [2024-07-16 01:32:42.729391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.038 qpair failed and we were unable to recover it. 00:27:17.038 [2024-07-16 01:32:42.729479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.038 [2024-07-16 01:32:42.729490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.038 qpair failed and we were unable to recover it. 00:27:17.038 [2024-07-16 01:32:42.729558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.038 [2024-07-16 01:32:42.729570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.038 qpair failed and we were unable to recover it. 00:27:17.038 [2024-07-16 01:32:42.729712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.038 [2024-07-16 01:32:42.729722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.038 qpair failed and we were unable to recover it. 00:27:17.038 [2024-07-16 01:32:42.729793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.038 [2024-07-16 01:32:42.729805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.038 qpair failed and we were unable to recover it. 
00:27:17.038 [2024-07-16 01:32:42.729870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.038 [2024-07-16 01:32:42.729881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.038 qpair failed and we were unable to recover it. 00:27:17.038 [2024-07-16 01:32:42.729938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.038 [2024-07-16 01:32:42.729947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.038 qpair failed and we were unable to recover it. 00:27:17.038 [2024-07-16 01:32:42.730085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.038 [2024-07-16 01:32:42.730096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.038 qpair failed and we were unable to recover it. 00:27:17.038 [2024-07-16 01:32:42.730180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.038 [2024-07-16 01:32:42.730192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.038 qpair failed and we were unable to recover it. 00:27:17.038 [2024-07-16 01:32:42.730274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.038 [2024-07-16 01:32:42.730285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.038 qpair failed and we were unable to recover it. 00:27:17.038 [2024-07-16 01:32:42.730433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.038 [2024-07-16 01:32:42.730444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.038 qpair failed and we were unable to recover it. 00:27:17.038 [2024-07-16 01:32:42.730515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.038 [2024-07-16 01:32:42.730526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.038 qpair failed and we were unable to recover it. 00:27:17.038 [2024-07-16 01:32:42.730655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.038 [2024-07-16 01:32:42.730666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.038 qpair failed and we were unable to recover it. 00:27:17.038 [2024-07-16 01:32:42.730749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.038 [2024-07-16 01:32:42.730760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.038 qpair failed and we were unable to recover it. 00:27:17.038 [2024-07-16 01:32:42.730852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.038 [2024-07-16 01:32:42.730863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.038 qpair failed and we were unable to recover it. 
00:27:17.038 [2024-07-16 01:32:42.731017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.038 [2024-07-16 01:32:42.731030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.038 qpair failed and we were unable to recover it. 00:27:17.038 [2024-07-16 01:32:42.731203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.038 [2024-07-16 01:32:42.731214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.038 qpair failed and we were unable to recover it. 00:27:17.038 [2024-07-16 01:32:42.731289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.038 [2024-07-16 01:32:42.731300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.038 qpair failed and we were unable to recover it. 00:27:17.038 [2024-07-16 01:32:42.731440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.038 [2024-07-16 01:32:42.731451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.038 qpair failed and we were unable to recover it. 00:27:17.038 [2024-07-16 01:32:42.731540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.038 [2024-07-16 01:32:42.731551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.038 qpair failed and we were unable to recover it. 00:27:17.038 [2024-07-16 01:32:42.731636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.038 [2024-07-16 01:32:42.731648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.038 qpair failed and we were unable to recover it. 00:27:17.038 [2024-07-16 01:32:42.731727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.038 [2024-07-16 01:32:42.731738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.038 qpair failed and we were unable to recover it. 00:27:17.038 [2024-07-16 01:32:42.731817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.038 [2024-07-16 01:32:42.731829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.038 qpair failed and we were unable to recover it. 00:27:17.038 [2024-07-16 01:32:42.731908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.038 [2024-07-16 01:32:42.731920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.038 qpair failed and we were unable to recover it. 00:27:17.038 [2024-07-16 01:32:42.732020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.038 [2024-07-16 01:32:42.732031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.038 qpair failed and we were unable to recover it. 
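
Note on the errno: on Linux, errno = 111 is ECONNREFUSED, i.e. each TCP connection attempt to 10.0.0.2 port 4420 (the IANA-assigned NVMe/TCP port) is rejected because nothing is listening on the target side at this point in the test. A minimal standalone sketch of the same failure posix_sock_create() is reporting (illustrative only, not SPDK code; the address and port are taken from the log):

/* connect_probe.c - reproduces the errno = 111 seen above.
 * Build: cc -o connect_probe connect_probe.c
 * Assumption: 10.0.0.2:4420 (from the log) has no listener, so
 * connect() fails with ECONNREFUSED (errno 111 on Linux). */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa = { 0 };
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);            /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
        /* With no listener this prints: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}

Run against a host with no listener on that port, this prints the same errno = 111 that fills the log above.
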
00:27:17.038 [... 4 more identical connect() failed / qpair failed sequences (01:32:42.732166 - 01:32:42.732489) ...]
00:27:17.039 [2024-07-16 01:32:42.732569] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization...
00:27:17.039 [2024-07-16 01:32:42.732612] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:17.039 [... 5 more identical connect() failed / qpair failed sequences (01:32:42.732644 - 01:32:42.733217) ...]
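
The two "Starting SPDK ... / DPDK EAL parameters ..." lines are the one piece of non-error output in this burst: the nvmf target application starting up and handing its command line to DPDK's Environment Abstraction Layer. In that argument list, -c 0xF0 is a hex coremask selecting cores 4-7, --file-prefix=spdk0 keeps this process's hugepage files separate from other DPDK processes on the machine, --proc-type=auto lets EAL decide between primary and secondary mode, and --base-virtaddr pins the starting virtual address for shared memory mappings. A hedged sketch of how such an argument vector reaches DPDK (a hypothetical standalone program, not SPDK's actual startup path; the log-level flags are omitted for brevity):

/* eal_init_sketch.c - sketch of passing EAL parameters like the ones
 * logged above to rte_eal_init(). Not how SPDK wires this up
 * internally; the argument values are copied from the log line. */
#include <rte_eal.h>
#include <stdio.h>

int main(void)
{
    char *eal_argv[] = {
        "nvmf",                      /* program name, as in the log */
        "-c", "0xF0",                /* coremask: cores 4-7 */
        "--no-telemetry",
        "--base-virtaddr=0x200000000000",
        "--match-allocations",
        "--file-prefix=spdk0",       /* hugepage/runtime namespace */
        "--proc-type=auto",
    };
    int eal_argc = sizeof(eal_argv) / sizeof(eal_argv[0]);

    /* rte_eal_init() consumes the EAL arguments; it returns the number
     * of parsed arguments, or a negative value on failure. */
    if (rte_eal_init(eal_argc, eal_argv) < 0) {
        fprintf(stderr, "EAL initialization failed\n");
        return 1;
    }

    rte_eal_cleanup();
    return 0;
}

Anything negative from rte_eal_init() means EAL could not come up (bad coremask, missing hugepages, and so on); the log above shows a successful start, since initialization proceeds while the connect retries continue around it.
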
00:27:17.039 [2024-07-16 01:32:42.733292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-07-16 01:32:42.733302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-07-16 01:32:42.733368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-07-16 01:32:42.733379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-07-16 01:32:42.733463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-07-16 01:32:42.733474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-07-16 01:32:42.733621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-07-16 01:32:42.733632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-07-16 01:32:42.733794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-07-16 01:32:42.733805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-07-16 01:32:42.733937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-07-16 01:32:42.733948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-07-16 01:32:42.734023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-07-16 01:32:42.734034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-07-16 01:32:42.734117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-07-16 01:32:42.734128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-07-16 01:32:42.734303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-07-16 01:32:42.734314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-07-16 01:32:42.734471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-07-16 01:32:42.734482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 
00:27:17.039 [2024-07-16 01:32:42.734555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-07-16 01:32:42.734566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-07-16 01:32:42.734711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-07-16 01:32:42.734723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-07-16 01:32:42.734786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-07-16 01:32:42.734797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-07-16 01:32:42.734954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-07-16 01:32:42.734965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-07-16 01:32:42.735026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-07-16 01:32:42.735036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-07-16 01:32:42.735188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-07-16 01:32:42.735200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-07-16 01:32:42.735355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-07-16 01:32:42.735366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-07-16 01:32:42.735497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-07-16 01:32:42.735507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-07-16 01:32:42.735575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-07-16 01:32:42.735586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-07-16 01:32:42.735650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-07-16 01:32:42.735661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 
00:27:17.039 [2024-07-16 01:32:42.735733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-07-16 01:32:42.735746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-07-16 01:32:42.735880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-07-16 01:32:42.735891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-07-16 01:32:42.736023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-07-16 01:32:42.736034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-07-16 01:32:42.736184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-07-16 01:32:42.736194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-07-16 01:32:42.736274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-07-16 01:32:42.736285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-07-16 01:32:42.736464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-07-16 01:32:42.736475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-07-16 01:32:42.736555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-07-16 01:32:42.736565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-07-16 01:32:42.736634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-07-16 01:32:42.736645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-07-16 01:32:42.736715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-07-16 01:32:42.736725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-07-16 01:32:42.736818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-07-16 01:32:42.736828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 
00:27:17.040 [2024-07-16 01:32:42.736974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-07-16 01:32:42.736985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-07-16 01:32:42.737051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-07-16 01:32:42.737062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-07-16 01:32:42.737193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-07-16 01:32:42.737204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-07-16 01:32:42.737278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-07-16 01:32:42.737289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-07-16 01:32:42.737375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-07-16 01:32:42.737388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-07-16 01:32:42.737466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-07-16 01:32:42.737477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-07-16 01:32:42.737562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-07-16 01:32:42.737572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-07-16 01:32:42.737701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-07-16 01:32:42.737713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-07-16 01:32:42.737778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-07-16 01:32:42.737788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-07-16 01:32:42.737934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-07-16 01:32:42.737945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 
00:27:17.040 [2024-07-16 01:32:42.738011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-07-16 01:32:42.738021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-07-16 01:32:42.738087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-07-16 01:32:42.738097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-07-16 01:32:42.738172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-07-16 01:32:42.738182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-07-16 01:32:42.738266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-07-16 01:32:42.738278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-07-16 01:32:42.738413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-07-16 01:32:42.738425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-07-16 01:32:42.738553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-07-16 01:32:42.738564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-07-16 01:32:42.738694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-07-16 01:32:42.738704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-07-16 01:32:42.738790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-07-16 01:32:42.738801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-07-16 01:32:42.738939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-07-16 01:32:42.738949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-07-16 01:32:42.739131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-07-16 01:32:42.739142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 
00:27:17.040 [2024-07-16 01:32:42.739279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-07-16 01:32:42.739290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-07-16 01:32:42.739437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-07-16 01:32:42.739447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-07-16 01:32:42.739531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-07-16 01:32:42.739542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-07-16 01:32:42.739620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-07-16 01:32:42.739630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-07-16 01:32:42.739793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-07-16 01:32:42.739804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-07-16 01:32:42.739870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-07-16 01:32:42.739881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-07-16 01:32:42.740012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-07-16 01:32:42.740023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-07-16 01:32:42.740101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-07-16 01:32:42.740111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-07-16 01:32:42.740176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-07-16 01:32:42.740187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-07-16 01:32:42.740353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-07-16 01:32:42.740364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 
00:27:17.040 [2024-07-16 01:32:42.740454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-07-16 01:32:42.740466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-07-16 01:32:42.740613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-07-16 01:32:42.740624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-07-16 01:32:42.740713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-07-16 01:32:42.740724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-07-16 01:32:42.740900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-07-16 01:32:42.740911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-07-16 01:32:42.741084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-07-16 01:32:42.741095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-07-16 01:32:42.741242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-07-16 01:32:42.741253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-07-16 01:32:42.741348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-07-16 01:32:42.741361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-07-16 01:32:42.741504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-07-16 01:32:42.741515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-07-16 01:32:42.741588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-07-16 01:32:42.741598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-07-16 01:32:42.741735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-07-16 01:32:42.741745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 
00:27:17.041 [2024-07-16 01:32:42.741916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-07-16 01:32:42.741927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-07-16 01:32:42.742087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-07-16 01:32:42.742098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-07-16 01:32:42.742297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-07-16 01:32:42.742308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-07-16 01:32:42.742454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-07-16 01:32:42.742465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-07-16 01:32:42.742538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-07-16 01:32:42.742549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-07-16 01:32:42.742622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-07-16 01:32:42.742633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-07-16 01:32:42.742783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-07-16 01:32:42.742794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-07-16 01:32:42.742875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-07-16 01:32:42.742886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-07-16 01:32:42.743041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-07-16 01:32:42.743052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-07-16 01:32:42.743228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-07-16 01:32:42.743239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 
00:27:17.041 [2024-07-16 01:32:42.743319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-07-16 01:32:42.743330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-07-16 01:32:42.743436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-07-16 01:32:42.743447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-07-16 01:32:42.743582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-07-16 01:32:42.743593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-07-16 01:32:42.743661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-07-16 01:32:42.743672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-07-16 01:32:42.743752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-07-16 01:32:42.743763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-07-16 01:32:42.743842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-07-16 01:32:42.743853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-07-16 01:32:42.744004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-07-16 01:32:42.744014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-07-16 01:32:42.744080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-07-16 01:32:42.744091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-07-16 01:32:42.744165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-07-16 01:32:42.744176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-07-16 01:32:42.744307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-07-16 01:32:42.744318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 
00:27:17.041 [2024-07-16 01:32:42.744481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-07-16 01:32:42.744492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-07-16 01:32:42.744572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-07-16 01:32:42.744583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-07-16 01:32:42.744725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-07-16 01:32:42.744736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-07-16 01:32:42.744933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-07-16 01:32:42.744944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-07-16 01:32:42.745023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-07-16 01:32:42.745034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-07-16 01:32:42.745123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-07-16 01:32:42.745134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-07-16 01:32:42.745263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-07-16 01:32:42.745274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-07-16 01:32:42.745415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-07-16 01:32:42.745428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-07-16 01:32:42.745578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-07-16 01:32:42.745590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-07-16 01:32:42.745766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-07-16 01:32:42.745777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 
00:27:17.044 EAL: No free 2048 kB hugepages reported on node 1
00:27:17.044 [2024-07-16 01:32:42.759415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-07-16 01:32:42.759427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-07-16 01:32:42.759524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-07-16 01:32:42.759535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-07-16 01:32:42.759730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-07-16 01:32:42.759740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-07-16 01:32:42.759887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-07-16 01:32:42.759898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-07-16 01:32:42.760046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-07-16 01:32:42.760057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-07-16 01:32:42.760133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-07-16 01:32:42.760144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-07-16 01:32:42.760212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-07-16 01:32:42.760223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-07-16 01:32:42.760305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-07-16 01:32:42.760316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-07-16 01:32:42.760461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-07-16 01:32:42.760473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-07-16 01:32:42.760619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-07-16 01:32:42.760630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 
00:27:17.044 [2024-07-16 01:32:42.760701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-07-16 01:32:42.760714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-07-16 01:32:42.760780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-07-16 01:32:42.760792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-07-16 01:32:42.760866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-07-16 01:32:42.760878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-07-16 01:32:42.761025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-07-16 01:32:42.761036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-07-16 01:32:42.761168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-07-16 01:32:42.761179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-07-16 01:32:42.761236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-07-16 01:32:42.761246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-07-16 01:32:42.761407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-07-16 01:32:42.761419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.044 qpair failed and we were unable to recover it. 00:27:17.044 [2024-07-16 01:32:42.761591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.044 [2024-07-16 01:32:42.761602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-07-16 01:32:42.761689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-07-16 01:32:42.761700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-07-16 01:32:42.761783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-07-16 01:32:42.761794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 
00:27:17.045 [2024-07-16 01:32:42.761878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-07-16 01:32:42.761889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-07-16 01:32:42.762037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-07-16 01:32:42.762048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-07-16 01:32:42.762134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-07-16 01:32:42.762146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-07-16 01:32:42.762216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-07-16 01:32:42.762226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-07-16 01:32:42.762359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-07-16 01:32:42.762371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-07-16 01:32:42.762461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-07-16 01:32:42.762472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-07-16 01:32:42.762607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-07-16 01:32:42.762618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-07-16 01:32:42.762767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-07-16 01:32:42.762778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-07-16 01:32:42.762928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-07-16 01:32:42.762940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-07-16 01:32:42.763071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-07-16 01:32:42.763082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 
00:27:17.045 [2024-07-16 01:32:42.763228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-07-16 01:32:42.763239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-07-16 01:32:42.763307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-07-16 01:32:42.763319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-07-16 01:32:42.763435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-07-16 01:32:42.763448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-07-16 01:32:42.763543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-07-16 01:32:42.763554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-07-16 01:32:42.763639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-07-16 01:32:42.763650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-07-16 01:32:42.763717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-07-16 01:32:42.763728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-07-16 01:32:42.763930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-07-16 01:32:42.763941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-07-16 01:32:42.764083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-07-16 01:32:42.764094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-07-16 01:32:42.764196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-07-16 01:32:42.764207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-07-16 01:32:42.764346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-07-16 01:32:42.764357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 
00:27:17.045 [2024-07-16 01:32:42.764562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-07-16 01:32:42.764573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-07-16 01:32:42.764659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-07-16 01:32:42.764670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-07-16 01:32:42.764805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-07-16 01:32:42.764816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-07-16 01:32:42.764975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-07-16 01:32:42.764985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-07-16 01:32:42.765074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-07-16 01:32:42.765085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-07-16 01:32:42.765171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-07-16 01:32:42.765182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-07-16 01:32:42.765331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-07-16 01:32:42.765347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-07-16 01:32:42.765509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-07-16 01:32:42.765520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-07-16 01:32:42.765655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-07-16 01:32:42.765666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-07-16 01:32:42.765745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-07-16 01:32:42.765755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 
00:27:17.045 [2024-07-16 01:32:42.765902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-07-16 01:32:42.765914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-07-16 01:32:42.765992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-07-16 01:32:42.766003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-07-16 01:32:42.766147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-07-16 01:32:42.766159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-07-16 01:32:42.766294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-07-16 01:32:42.766305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-07-16 01:32:42.766431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-07-16 01:32:42.766443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-07-16 01:32:42.766524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-07-16 01:32:42.766535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-07-16 01:32:42.766606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-07-16 01:32:42.766617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-07-16 01:32:42.766686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-07-16 01:32:42.766697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-07-16 01:32:42.766966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-07-16 01:32:42.766977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-07-16 01:32:42.767057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-07-16 01:32:42.767068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 
00:27:17.050 [2024-07-16 01:32:42.788948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.050 [2024-07-16 01:32:42.788961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.050 qpair failed and we were unable to recover it. 00:27:17.050 [2024-07-16 01:32:42.789106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.050 [2024-07-16 01:32:42.789117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.050 qpair failed and we were unable to recover it. 00:27:17.050 [2024-07-16 01:32:42.789208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.050 [2024-07-16 01:32:42.789219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.050 qpair failed and we were unable to recover it. 00:27:17.050 [2024-07-16 01:32:42.789297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.050 [2024-07-16 01:32:42.789309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.050 qpair failed and we were unable to recover it. 00:27:17.050 [2024-07-16 01:32:42.789451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.050 [2024-07-16 01:32:42.789462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.050 qpair failed and we were unable to recover it. 00:27:17.051 [2024-07-16 01:32:42.789543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-07-16 01:32:42.789554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-07-16 01:32:42.789680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-07-16 01:32:42.789691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-07-16 01:32:42.789765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-07-16 01:32:42.789777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-07-16 01:32:42.789912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-07-16 01:32:42.789923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-07-16 01:32:42.790008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-07-16 01:32:42.790019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 
00:27:17.051 [2024-07-16 01:32:42.790086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-07-16 01:32:42.790097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-07-16 01:32:42.790238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-07-16 01:32:42.790249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-07-16 01:32:42.790346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-07-16 01:32:42.790360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-07-16 01:32:42.790432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-07-16 01:32:42.790443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-07-16 01:32:42.790604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-07-16 01:32:42.790615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-07-16 01:32:42.790799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-07-16 01:32:42.790810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-07-16 01:32:42.791053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-07-16 01:32:42.791064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-07-16 01:32:42.791215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-07-16 01:32:42.791226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-07-16 01:32:42.791292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-07-16 01:32:42.791302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-07-16 01:32:42.791376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-07-16 01:32:42.791387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 
00:27:17.051 [2024-07-16 01:32:42.791522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-07-16 01:32:42.791533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-07-16 01:32:42.791623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-07-16 01:32:42.791633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-07-16 01:32:42.791713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-07-16 01:32:42.791724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-07-16 01:32:42.791876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-07-16 01:32:42.791887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-07-16 01:32:42.792027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-07-16 01:32:42.792038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-07-16 01:32:42.792103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-07-16 01:32:42.792114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-07-16 01:32:42.792248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-07-16 01:32:42.792259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-07-16 01:32:42.792341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-07-16 01:32:42.792352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-07-16 01:32:42.792519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-07-16 01:32:42.792529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-07-16 01:32:42.792668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-07-16 01:32:42.792679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 
00:27:17.051 [2024-07-16 01:32:42.792757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-07-16 01:32:42.792767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-07-16 01:32:42.792832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-07-16 01:32:42.792843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-07-16 01:32:42.792974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-07-16 01:32:42.792985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-07-16 01:32:42.793063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-07-16 01:32:42.793073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-07-16 01:32:42.793210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-07-16 01:32:42.793222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-07-16 01:32:42.793372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-07-16 01:32:42.793384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-07-16 01:32:42.793471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-07-16 01:32:42.793482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-07-16 01:32:42.793639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-07-16 01:32:42.793650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-07-16 01:32:42.793725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-07-16 01:32:42.793736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-07-16 01:32:42.793867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-07-16 01:32:42.793879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 
00:27:17.051 [2024-07-16 01:32:42.794017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-07-16 01:32:42.794030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-07-16 01:32:42.794164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-07-16 01:32:42.794175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-07-16 01:32:42.794320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-07-16 01:32:42.794330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-07-16 01:32:42.794405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-07-16 01:32:42.794416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-07-16 01:32:42.794494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-07-16 01:32:42.794505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-07-16 01:32:42.794569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-07-16 01:32:42.794578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-07-16 01:32:42.794647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-07-16 01:32:42.794658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-07-16 01:32:42.794735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-07-16 01:32:42.794746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-07-16 01:32:42.794885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-07-16 01:32:42.794897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-07-16 01:32:42.795023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-07-16 01:32:42.795034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 
00:27:17.052 [2024-07-16 01:32:42.795106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-07-16 01:32:42.795117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-07-16 01:32:42.795293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-07-16 01:32:42.795304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-07-16 01:32:42.795387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-07-16 01:32:42.795399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-07-16 01:32:42.795537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-07-16 01:32:42.795548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-07-16 01:32:42.795686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-07-16 01:32:42.795697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-07-16 01:32:42.795858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-07-16 01:32:42.795869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-07-16 01:32:42.795945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-07-16 01:32:42.795955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-07-16 01:32:42.796032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-07-16 01:32:42.796043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-07-16 01:32:42.796113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-07-16 01:32:42.796123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-07-16 01:32:42.796265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-07-16 01:32:42.796277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 
00:27:17.052 [2024-07-16 01:32:42.796426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-07-16 01:32:42.796437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-07-16 01:32:42.796502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-07-16 01:32:42.796513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-07-16 01:32:42.796647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-07-16 01:32:42.796658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-07-16 01:32:42.796751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-07-16 01:32:42.796762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-07-16 01:32:42.796828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-07-16 01:32:42.796838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-07-16 01:32:42.796929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-07-16 01:32:42.796940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-07-16 01:32:42.797025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-07-16 01:32:42.797036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-07-16 01:32:42.797170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-07-16 01:32:42.797181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-07-16 01:32:42.797258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-07-16 01:32:42.797269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-07-16 01:32:42.797342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-07-16 01:32:42.797353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 
00:27:17.052 [2024-07-16 01:32:42.797430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-07-16 01:32:42.797441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-07-16 01:32:42.797641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-07-16 01:32:42.797653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-07-16 01:32:42.797829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-07-16 01:32:42.797840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-07-16 01:32:42.797985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-07-16 01:32:42.797996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-07-16 01:32:42.798071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-07-16 01:32:42.798081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-07-16 01:32:42.798218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-07-16 01:32:42.798229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-07-16 01:32:42.798321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-07-16 01:32:42.798332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-07-16 01:32:42.798498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-07-16 01:32:42.798510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-07-16 01:32:42.798598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-07-16 01:32:42.798609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-07-16 01:32:42.798730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-07-16 01:32:42.798741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 
00:27:17.052 [2024-07-16 01:32:42.798803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-07-16 01:32:42.798816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-07-16 01:32:42.798945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-07-16 01:32:42.798956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-07-16 01:32:42.799087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-07-16 01:32:42.799098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-07-16 01:32:42.799176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-07-16 01:32:42.799186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-07-16 01:32:42.799279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-07-16 01:32:42.799291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-07-16 01:32:42.799375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-07-16 01:32:42.799387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-07-16 01:32:42.799457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-07-16 01:32:42.799468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-07-16 01:32:42.799616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-07-16 01:32:42.799628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-07-16 01:32:42.799772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-07-16 01:32:42.799782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-07-16 01:32:42.799849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-07-16 01:32:42.799859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 
00:27:17.053 [2024-07-16 01:32:42.799934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-07-16 01:32:42.799945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-07-16 01:32:42.800039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-07-16 01:32:42.800050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-07-16 01:32:42.800130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-07-16 01:32:42.800141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-07-16 01:32:42.800290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-07-16 01:32:42.800301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-07-16 01:32:42.800471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-07-16 01:32:42.800483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-07-16 01:32:42.800544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-07-16 01:32:42.800555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-07-16 01:32:42.800615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-07-16 01:32:42.800626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-07-16 01:32:42.800693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-07-16 01:32:42.800705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-07-16 01:32:42.800861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-07-16 01:32:42.800872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-07-16 01:32:42.800996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-07-16 01:32:42.801007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 
00:27:17.053 [2024-07-16 01:32:42.801091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-07-16 01:32:42.801101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-07-16 01:32:42.801192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-07-16 01:32:42.801203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-07-16 01:32:42.801293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-07-16 01:32:42.801304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-07-16 01:32:42.801371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-07-16 01:32:42.801381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-07-16 01:32:42.801468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-07-16 01:32:42.801479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-07-16 01:32:42.801557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-07-16 01:32:42.801568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-07-16 01:32:42.801642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-07-16 01:32:42.801652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-07-16 01:32:42.801889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-07-16 01:32:42.801900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-07-16 01:32:42.801979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-07-16 01:32:42.801990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-07-16 01:32:42.802072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-07-16 01:32:42.802084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 
00:27:17.053 [2024-07-16 01:32:42.802162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-07-16 01:32:42.802173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-07-16 01:32:42.802251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-07-16 01:32:42.802262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-07-16 01:32:42.802401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-07-16 01:32:42.802413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-07-16 01:32:42.802611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-07-16 01:32:42.802623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-07-16 01:32:42.802777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-07-16 01:32:42.802788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-07-16 01:32:42.802855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-07-16 01:32:42.802864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-07-16 01:32:42.802946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-07-16 01:32:42.802956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-07-16 01:32:42.803044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-07-16 01:32:42.803055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-07-16 01:32:42.803144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-07-16 01:32:42.803155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-07-16 01:32:42.803356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-07-16 01:32:42.803368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 
00:27:17.053 [2024-07-16 01:32:42.803440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-07-16 01:32:42.803453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-07-16 01:32:42.803544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-07-16 01:32:42.803556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-07-16 01:32:42.803625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-07-16 01:32:42.803635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-07-16 01:32:42.803780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-07-16 01:32:42.803792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-07-16 01:32:42.803866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-07-16 01:32:42.803876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-07-16 01:32:42.803953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-07-16 01:32:42.803963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-07-16 01:32:42.804096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-07-16 01:32:42.804106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-07-16 01:32:42.804199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-07-16 01:32:42.804210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-07-16 01:32:42.804292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-07-16 01:32:42.804304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-07-16 01:32:42.804444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-07-16 01:32:42.804455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 
00:27:17.054 [2024-07-16 01:32:42.804518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-07-16 01:32:42.804528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-07-16 01:32:42.804666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-07-16 01:32:42.804677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-07-16 01:32:42.804809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-07-16 01:32:42.804821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-07-16 01:32:42.804891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-07-16 01:32:42.804901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-07-16 01:32:42.804978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-07-16 01:32:42.804989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-07-16 01:32:42.805050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-07-16 01:32:42.805059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-07-16 01:32:42.805140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-07-16 01:32:42.805151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-07-16 01:32:42.805214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-07-16 01:32:42.805225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-07-16 01:32:42.805292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-07-16 01:32:42.805303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-07-16 01:32:42.805463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-07-16 01:32:42.805474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 
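On Linux, errno 111 is ECONNREFUSED: the TCP SYN sent to 10.0.0.2:4420 is being answered with RST because nothing is listening on the NVMe/TCP port at that moment. Below is a minimal sketch (standard POSIX sockets, not SPDK code; the address and port simply mirror the log) that performs the same probe and prints the same errno:

    /* probe_4420.c - minimal sketch of the connect() the log shows failing.
     * Standard POSIX sockets only; 10.0.0.2:4420 mirrors the addr/port above. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
            /* With no listener on the target this prints:
             * connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }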
00:27:17.054 [2024-07-16 01:32:42.806260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.054 [2024-07-16 01:32:42.806270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:17.054 qpair failed and we were unable to recover it.
00:27:17.054 [2024-07-16 01:32:42.806294] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4
00:27:17.054 [2024-07-16 01:32:42.806363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.054 [2024-07-16 01:32:42.806376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:17.054 qpair failed and we were unable to recover it.
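The single NOTICE buried in the failure storm is the SPDK application framework coming up: spdk_app_start() in app.c reports the reactor cores it enumerated before any transport work begins, which is how "Total cores available: 4" lands between two connect failures. A hedged sketch of the bootstrap that emits this message follows; the option fields and signatures shown are typical but vary across SPDK releases, so treat the details as assumptions:

    /* Hedged sketch of an SPDK app bootstrap; not the test's actual code.
     * spdk_app_start() logs "Total cores available: N" during startup. */
    #include "spdk/event.h"

    static void start_fn(void *ctx)
    {
        /* Application logic would start here; this sketch stops immediately. */
        spdk_app_stop(0);
    }

    int main(void)
    {
        struct spdk_app_opts opts;

        spdk_app_opts_init(&opts, sizeof(opts));
        opts.name = "probe";            /* assumed app name */
        opts.reactor_mask = "0xF";      /* 4 cores -> "Total cores available: 4" */

        int rc = spdk_app_start(&opts, start_fn, NULL);
        spdk_app_fini();
        return rc;
    }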
00:27:17.054 [2024-07-16 01:32:42.807987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-07-16 01:32:42.807999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-07-16 01:32:42.808172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-07-16 01:32:42.808214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-07-16 01:32:42.808335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-07-16 01:32:42.808377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-07-16 01:32:42.808548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-07-16 01:32:42.808566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-07-16 01:32:42.808778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-07-16 01:32:42.808794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-07-16 01:32:42.809011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-07-16 01:32:42.809027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-07-16 01:32:42.809201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-07-16 01:32:42.809217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-07-16 01:32:42.809328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-07-16 01:32:42.809349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-07-16 01:32:42.809490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-07-16 01:32:42.809506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-07-16 01:32:42.809649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-07-16 01:32:42.809665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 
00:27:17.055 [2024-07-16 01:32:42.809832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-07-16 01:32:42.809848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-07-16 01:32:42.809990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-07-16 01:32:42.810006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-07-16 01:32:42.810152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-07-16 01:32:42.810168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-07-16 01:32:42.810330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-07-16 01:32:42.810357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-07-16 01:32:42.810522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-07-16 01:32:42.810538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-07-16 01:32:42.810614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-07-16 01:32:42.810630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-07-16 01:32:42.810774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-07-16 01:32:42.810790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-07-16 01:32:42.810932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-07-16 01:32:42.810948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-07-16 01:32:42.811096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-07-16 01:32:42.811112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-07-16 01:32:42.811265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-07-16 01:32:42.811279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 
00:27:17.055 [2024-07-16 01:32:42.811411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-07-16 01:32:42.811423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-07-16 01:32:42.811509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-07-16 01:32:42.811520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-07-16 01:32:42.811606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-07-16 01:32:42.811617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-07-16 01:32:42.811702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-07-16 01:32:42.811713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-07-16 01:32:42.811914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-07-16 01:32:42.811926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-07-16 01:32:42.812058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-07-16 01:32:42.812069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-07-16 01:32:42.812148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-07-16 01:32:42.812159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-07-16 01:32:42.812305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-07-16 01:32:42.812316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-07-16 01:32:42.812388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-07-16 01:32:42.812398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-07-16 01:32:42.812469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-07-16 01:32:42.812492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 
00:27:17.055 [2024-07-16 01:32:42.812550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-07-16 01:32:42.812559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-07-16 01:32:42.812626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-07-16 01:32:42.812636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-07-16 01:32:42.812773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-07-16 01:32:42.812785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-07-16 01:32:42.812845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-07-16 01:32:42.812855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-07-16 01:32:42.812931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-07-16 01:32:42.812941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-07-16 01:32:42.813030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-07-16 01:32:42.813041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-07-16 01:32:42.813187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-07-16 01:32:42.813198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-07-16 01:32:42.813357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-07-16 01:32:42.813367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-07-16 01:32:42.813437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-07-16 01:32:42.813447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-07-16 01:32:42.813596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-07-16 01:32:42.813608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 
00:27:17.055 [2024-07-16 01:32:42.813756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-07-16 01:32:42.813768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-07-16 01:32:42.813846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-07-16 01:32:42.813859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-07-16 01:32:42.813989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-07-16 01:32:42.814001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.055 [2024-07-16 01:32:42.814155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.055 [2024-07-16 01:32:42.814166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.055 qpair failed and we were unable to recover it. 00:27:17.056 [2024-07-16 01:32:42.814394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-07-16 01:32:42.814409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-07-16 01:32:42.814547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-07-16 01:32:42.814558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-07-16 01:32:42.814757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-07-16 01:32:42.814768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-07-16 01:32:42.814837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-07-16 01:32:42.814848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-07-16 01:32:42.815026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-07-16 01:32:42.815038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-07-16 01:32:42.815122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-07-16 01:32:42.815133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 
00:27:17.056 [2024-07-16 01:32:42.815225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-07-16 01:32:42.815236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-07-16 01:32:42.815452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-07-16 01:32:42.815465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-07-16 01:32:42.815548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-07-16 01:32:42.815558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-07-16 01:32:42.815629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-07-16 01:32:42.815640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-07-16 01:32:42.815719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-07-16 01:32:42.815731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-07-16 01:32:42.815873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-07-16 01:32:42.815884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-07-16 01:32:42.815967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-07-16 01:32:42.815979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-07-16 01:32:42.816180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-07-16 01:32:42.816192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-07-16 01:32:42.816320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-07-16 01:32:42.816331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-07-16 01:32:42.816413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-07-16 01:32:42.816425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 
00:27:17.056 [2024-07-16 01:32:42.816586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-07-16 01:32:42.816596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-07-16 01:32:42.816691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-07-16 01:32:42.816703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-07-16 01:32:42.816780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-07-16 01:32:42.816791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-07-16 01:32:42.816870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-07-16 01:32:42.816881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-07-16 01:32:42.816968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-07-16 01:32:42.816979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-07-16 01:32:42.817192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-07-16 01:32:42.817203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-07-16 01:32:42.817332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-07-16 01:32:42.817347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-07-16 01:32:42.817411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-07-16 01:32:42.817420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-07-16 01:32:42.817495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-07-16 01:32:42.817506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-07-16 01:32:42.817574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-07-16 01:32:42.817584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 
00:27:17.056 [2024-07-16 01:32:42.817673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-07-16 01:32:42.817685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-07-16 01:32:42.817785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-07-16 01:32:42.817796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-07-16 01:32:42.817924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-07-16 01:32:42.817935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-07-16 01:32:42.818084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-07-16 01:32:42.818095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-07-16 01:32:42.818245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-07-16 01:32:42.818256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-07-16 01:32:42.818392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-07-16 01:32:42.818405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-07-16 01:32:42.818543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-07-16 01:32:42.818554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-07-16 01:32:42.818627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-07-16 01:32:42.818638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-07-16 01:32:42.818777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.056 [2024-07-16 01:32:42.818788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.056 qpair failed and we were unable to recover it. 00:27:17.056 [2024-07-16 01:32:42.818858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-07-16 01:32:42.818869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 
00:27:17.057 [2024-07-16 01:32:42.818947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-07-16 01:32:42.818958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-07-16 01:32:42.819180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-07-16 01:32:42.819193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-07-16 01:32:42.819261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-07-16 01:32:42.819271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-07-16 01:32:42.819368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-07-16 01:32:42.819380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-07-16 01:32:42.819459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-07-16 01:32:42.819471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-07-16 01:32:42.819552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-07-16 01:32:42.819563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-07-16 01:32:42.819720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-07-16 01:32:42.819732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-07-16 01:32:42.819793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-07-16 01:32:42.819802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-07-16 01:32:42.819870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-07-16 01:32:42.819881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-07-16 01:32:42.819952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-07-16 01:32:42.819963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 
00:27:17.057 [2024-07-16 01:32:42.820025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-07-16 01:32:42.820035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-07-16 01:32:42.820119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-07-16 01:32:42.820130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-07-16 01:32:42.820204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-07-16 01:32:42.820216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-07-16 01:32:42.820311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-07-16 01:32:42.820321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-07-16 01:32:42.820402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-07-16 01:32:42.820412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-07-16 01:32:42.820506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-07-16 01:32:42.820517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-07-16 01:32:42.820592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-07-16 01:32:42.820603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-07-16 01:32:42.820670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-07-16 01:32:42.820680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-07-16 01:32:42.820764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-07-16 01:32:42.820775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-07-16 01:32:42.820851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-07-16 01:32:42.820861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 
00:27:17.057 [2024-07-16 01:32:42.821011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-07-16 01:32:42.821022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-07-16 01:32:42.821092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-07-16 01:32:42.821103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-07-16 01:32:42.821193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-07-16 01:32:42.821204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-07-16 01:32:42.821342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-07-16 01:32:42.821354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-07-16 01:32:42.821426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-07-16 01:32:42.821436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-07-16 01:32:42.821591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-07-16 01:32:42.821602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-07-16 01:32:42.821737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-07-16 01:32:42.821749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-07-16 01:32:42.821815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-07-16 01:32:42.821825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-07-16 01:32:42.821969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-07-16 01:32:42.821980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-07-16 01:32:42.822122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-07-16 01:32:42.822134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 
00:27:17.057 [2024-07-16 01:32:42.822272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-07-16 01:32:42.822282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-07-16 01:32:42.822369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-07-16 01:32:42.822381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-07-16 01:32:42.822467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-07-16 01:32:42.822478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-07-16 01:32:42.822658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-07-16 01:32:42.822669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-07-16 01:32:42.822738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-07-16 01:32:42.822749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-07-16 01:32:42.822891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-07-16 01:32:42.822903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-07-16 01:32:42.822978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-07-16 01:32:42.822988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-07-16 01:32:42.823057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-07-16 01:32:42.823068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-07-16 01:32:42.823130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-07-16 01:32:42.823139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-07-16 01:32:42.823217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-07-16 01:32:42.823239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 
00:27:17.058 [2024-07-16 01:32:42.823327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-07-16 01:32:42.823342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-07-16 01:32:42.823430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-07-16 01:32:42.823444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-07-16 01:32:42.823529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-07-16 01:32:42.823539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-07-16 01:32:42.823742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-07-16 01:32:42.823752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-07-16 01:32:42.823890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-07-16 01:32:42.823901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-07-16 01:32:42.823975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-07-16 01:32:42.823986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-07-16 01:32:42.824124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-07-16 01:32:42.824135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-07-16 01:32:42.824212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-07-16 01:32:42.824222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-07-16 01:32:42.824366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-07-16 01:32:42.824379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-07-16 01:32:42.824511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-07-16 01:32:42.824522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 
00:27:17.058 [2024-07-16 01:32:42.824606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-07-16 01:32:42.824618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-07-16 01:32:42.824689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-07-16 01:32:42.824700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-07-16 01:32:42.824839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-07-16 01:32:42.824849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-07-16 01:32:42.824947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-07-16 01:32:42.824958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-07-16 01:32:42.825041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-07-16 01:32:42.825051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-07-16 01:32:42.825197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-07-16 01:32:42.825208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-07-16 01:32:42.825283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-07-16 01:32:42.825293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-07-16 01:32:42.825520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-07-16 01:32:42.825532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-07-16 01:32:42.825667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-07-16 01:32:42.825678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-07-16 01:32:42.825754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-07-16 01:32:42.825765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 
00:27:17.058 [2024-07-16 01:32:42.825847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-07-16 01:32:42.825857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-07-16 01:32:42.825944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-07-16 01:32:42.825956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-07-16 01:32:42.826026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-07-16 01:32:42.826035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-07-16 01:32:42.826118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-07-16 01:32:42.826130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-07-16 01:32:42.826293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-07-16 01:32:42.826304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-07-16 01:32:42.826449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-07-16 01:32:42.826462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-07-16 01:32:42.826548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-07-16 01:32:42.826558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-07-16 01:32:42.826644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-07-16 01:32:42.826655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-07-16 01:32:42.826785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-07-16 01:32:42.826796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-07-16 01:32:42.826885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-07-16 01:32:42.826896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 
00:27:17.058 [2024-07-16 01:32:42.827030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.058 [2024-07-16 01:32:42.827041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:17.058 qpair failed and we were unable to recover it.
00:27:17.058 [2024-07-16 01:32:42.827128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.058 [2024-07-16 01:32:42.827140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:17.058 qpair failed and we were unable to recover it.
[... ~185 verbatim repetitions of the same three-line error sequence elided (01:32:42.827271 through 01:32:42.848863, elapsed 00:27:17.058-00:27:17.063), all against tqpair=0x7ff0bc000b90 ...]
00:27:17.063 [2024-07-16 01:32:42.849118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.063 [2024-07-16 01:32:42.849131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:17.063 qpair failed and we were unable to recover it.
00:27:17.063 [2024-07-16 01:32:42.849350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.063 [2024-07-16 01:32:42.849362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.063 qpair failed and we were unable to recover it. 00:27:17.063 [2024-07-16 01:32:42.849458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.063 [2024-07-16 01:32:42.849470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.063 qpair failed and we were unable to recover it. 00:27:17.063 [2024-07-16 01:32:42.849539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.063 [2024-07-16 01:32:42.849550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.063 qpair failed and we were unable to recover it. 00:27:17.063 [2024-07-16 01:32:42.849629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.063 [2024-07-16 01:32:42.849640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.063 qpair failed and we were unable to recover it. 00:27:17.063 [2024-07-16 01:32:42.849865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.063 [2024-07-16 01:32:42.849877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.063 qpair failed and we were unable to recover it. 00:27:17.063 [2024-07-16 01:32:42.850031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.063 [2024-07-16 01:32:42.850042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.063 qpair failed and we were unable to recover it. 00:27:17.063 [2024-07-16 01:32:42.850199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.063 [2024-07-16 01:32:42.850210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.063 qpair failed and we were unable to recover it. 00:27:17.063 [2024-07-16 01:32:42.850415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.063 [2024-07-16 01:32:42.850427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.063 qpair failed and we were unable to recover it. 00:27:17.063 [2024-07-16 01:32:42.850543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.063 [2024-07-16 01:32:42.850588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:17.063 qpair failed and we were unable to recover it. 00:27:17.063 [2024-07-16 01:32:42.850695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.063 [2024-07-16 01:32:42.850714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:17.063 qpair failed and we were unable to recover it. 
00:27:17.063 [2024-07-16 01:32:42.850900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.063 [2024-07-16 01:32:42.850927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:17.063 qpair failed and we were unable to recover it. 00:27:17.063 [2024-07-16 01:32:42.851106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.063 [2024-07-16 01:32:42.851118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.063 qpair failed and we were unable to recover it. 00:27:17.063 [2024-07-16 01:32:42.851273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.063 [2024-07-16 01:32:42.851285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.063 qpair failed and we were unable to recover it. 00:27:17.063 [2024-07-16 01:32:42.851362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.063 [2024-07-16 01:32:42.851374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.063 qpair failed and we were unable to recover it. 00:27:17.063 [2024-07-16 01:32:42.851461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.063 [2024-07-16 01:32:42.851471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.063 qpair failed and we were unable to recover it. 00:27:17.063 [2024-07-16 01:32:42.851616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.063 [2024-07-16 01:32:42.851626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.063 qpair failed and we were unable to recover it. 00:27:17.063 [2024-07-16 01:32:42.851707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.063 [2024-07-16 01:32:42.851718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.063 qpair failed and we were unable to recover it. 00:27:17.063 [2024-07-16 01:32:42.851803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.063 [2024-07-16 01:32:42.851814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.063 qpair failed and we were unable to recover it. 00:27:17.063 [2024-07-16 01:32:42.851910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.063 [2024-07-16 01:32:42.851921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.063 qpair failed and we were unable to recover it. 00:27:17.063 [2024-07-16 01:32:42.851984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.063 [2024-07-16 01:32:42.851994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.063 qpair failed and we were unable to recover it. 
00:27:17.063 [2024-07-16 01:32:42.852074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.063 [2024-07-16 01:32:42.852085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.063 qpair failed and we were unable to recover it. 00:27:17.063 [2024-07-16 01:32:42.852157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.063 [2024-07-16 01:32:42.852170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.063 qpair failed and we were unable to recover it. 00:27:17.063 [2024-07-16 01:32:42.852249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.063 [2024-07-16 01:32:42.852260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.063 qpair failed and we were unable to recover it. 00:27:17.063 [2024-07-16 01:32:42.852322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.063 [2024-07-16 01:32:42.852332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.063 qpair failed and we were unable to recover it. 00:27:17.063 [2024-07-16 01:32:42.852425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.063 [2024-07-16 01:32:42.852437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.063 qpair failed and we were unable to recover it. 00:27:17.063 [2024-07-16 01:32:42.852726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.063 [2024-07-16 01:32:42.852738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.063 qpair failed and we were unable to recover it. 00:27:17.063 [2024-07-16 01:32:42.852872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.063 [2024-07-16 01:32:42.852883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.063 qpair failed and we were unable to recover it. 00:27:17.063 [2024-07-16 01:32:42.852973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.063 [2024-07-16 01:32:42.852984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.063 qpair failed and we were unable to recover it. 00:27:17.063 [2024-07-16 01:32:42.853075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.063 [2024-07-16 01:32:42.853086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.063 qpair failed and we were unable to recover it. 00:27:17.063 [2024-07-16 01:32:42.853167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.063 [2024-07-16 01:32:42.853177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.063 qpair failed and we were unable to recover it. 
00:27:17.063 [2024-07-16 01:32:42.853242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.063 [2024-07-16 01:32:42.853252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.063 qpair failed and we were unable to recover it. 00:27:17.063 [2024-07-16 01:32:42.853369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.063 [2024-07-16 01:32:42.853381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.063 qpair failed and we were unable to recover it. 00:27:17.063 [2024-07-16 01:32:42.853521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-07-16 01:32:42.853532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-07-16 01:32:42.853610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-07-16 01:32:42.853620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-07-16 01:32:42.853713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-07-16 01:32:42.853723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-07-16 01:32:42.853789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-07-16 01:32:42.853799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-07-16 01:32:42.853880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-07-16 01:32:42.853890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-07-16 01:32:42.853959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-07-16 01:32:42.853971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-07-16 01:32:42.854113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-07-16 01:32:42.854124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-07-16 01:32:42.854271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-07-16 01:32:42.854282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 
00:27:17.064 [2024-07-16 01:32:42.854417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-07-16 01:32:42.854428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-07-16 01:32:42.854571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-07-16 01:32:42.854582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-07-16 01:32:42.854713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-07-16 01:32:42.854724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-07-16 01:32:42.854806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-07-16 01:32:42.854816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-07-16 01:32:42.854913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-07-16 01:32:42.854923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-07-16 01:32:42.855060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-07-16 01:32:42.855072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-07-16 01:32:42.855151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-07-16 01:32:42.855161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-07-16 01:32:42.855226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-07-16 01:32:42.855238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-07-16 01:32:42.855332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-07-16 01:32:42.855348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-07-16 01:32:42.855436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-07-16 01:32:42.855447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 
00:27:17.064 [2024-07-16 01:32:42.855516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-07-16 01:32:42.855528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-07-16 01:32:42.855670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-07-16 01:32:42.855681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-07-16 01:32:42.855748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-07-16 01:32:42.855759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-07-16 01:32:42.855837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-07-16 01:32:42.855848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-07-16 01:32:42.855994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-07-16 01:32:42.856006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-07-16 01:32:42.856155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-07-16 01:32:42.856166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-07-16 01:32:42.856237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-07-16 01:32:42.856248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-07-16 01:32:42.856316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-07-16 01:32:42.856327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-07-16 01:32:42.856411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-07-16 01:32:42.856424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-07-16 01:32:42.856670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-07-16 01:32:42.856681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 
00:27:17.064 [2024-07-16 01:32:42.856739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-07-16 01:32:42.856749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-07-16 01:32:42.856825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-07-16 01:32:42.856836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-07-16 01:32:42.856904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-07-16 01:32:42.856916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-07-16 01:32:42.857005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-07-16 01:32:42.857016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-07-16 01:32:42.857164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-07-16 01:32:42.857175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-07-16 01:32:42.857238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-07-16 01:32:42.857247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-07-16 01:32:42.857311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-07-16 01:32:42.857321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-07-16 01:32:42.857402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-07-16 01:32:42.857413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-07-16 01:32:42.857486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-07-16 01:32:42.857497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-07-16 01:32:42.857637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-07-16 01:32:42.857648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 
00:27:17.064 [2024-07-16 01:32:42.857824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-07-16 01:32:42.857835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-07-16 01:32:42.857906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-07-16 01:32:42.857916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-07-16 01:32:42.857990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-07-16 01:32:42.858001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-07-16 01:32:42.858132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-07-16 01:32:42.858144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-07-16 01:32:42.858300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-07-16 01:32:42.858311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-07-16 01:32:42.858405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-07-16 01:32:42.858416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-07-16 01:32:42.858549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-07-16 01:32:42.858560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-07-16 01:32:42.858739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-07-16 01:32:42.858749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-07-16 01:32:42.858827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-07-16 01:32:42.858839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-07-16 01:32:42.858929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-07-16 01:32:42.858940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 
00:27:17.065 [2024-07-16 01:32:42.859021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-07-16 01:32:42.859031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-07-16 01:32:42.859164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-07-16 01:32:42.859175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-07-16 01:32:42.859257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-07-16 01:32:42.859267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-07-16 01:32:42.859398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-07-16 01:32:42.859410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-07-16 01:32:42.859498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-07-16 01:32:42.859509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-07-16 01:32:42.859586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-07-16 01:32:42.859597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-07-16 01:32:42.859656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-07-16 01:32:42.859666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-07-16 01:32:42.859824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-07-16 01:32:42.859835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-07-16 01:32:42.860021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-07-16 01:32:42.860041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0c4000b90 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-07-16 01:32:42.860150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-07-16 01:32:42.860172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 
00:27:17.065 [2024-07-16 01:32:42.860281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-07-16 01:32:42.860297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-07-16 01:32:42.860402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-07-16 01:32:42.860419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-07-16 01:32:42.860509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-07-16 01:32:42.860525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-07-16 01:32:42.860615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-07-16 01:32:42.860631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-07-16 01:32:42.860770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-07-16 01:32:42.860783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-07-16 01:32:42.860983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-07-16 01:32:42.860994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-07-16 01:32:42.861124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-07-16 01:32:42.861135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-07-16 01:32:42.861205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-07-16 01:32:42.861215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-07-16 01:32:42.861312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-07-16 01:32:42.861322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-07-16 01:32:42.861410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-07-16 01:32:42.861421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 
00:27:17.065 [2024-07-16 01:32:42.861563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-07-16 01:32:42.861574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-07-16 01:32:42.861658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-07-16 01:32:42.861669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-07-16 01:32:42.861737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-07-16 01:32:42.861747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-07-16 01:32:42.861892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-07-16 01:32:42.861902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-07-16 01:32:42.861966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-07-16 01:32:42.861976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-07-16 01:32:42.862129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-07-16 01:32:42.862140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-07-16 01:32:42.862219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-07-16 01:32:42.862230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-07-16 01:32:42.862375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-07-16 01:32:42.862386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-07-16 01:32:42.862446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-07-16 01:32:42.862456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-07-16 01:32:42.862595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-07-16 01:32:42.862606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 
00:27:17.066 [2024-07-16 01:32:42.862736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-07-16 01:32:42.862747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-07-16 01:32:42.862886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-07-16 01:32:42.862897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-07-16 01:32:42.863035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-07-16 01:32:42.863046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-07-16 01:32:42.863137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-07-16 01:32:42.863148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-07-16 01:32:42.863369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-07-16 01:32:42.863380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-07-16 01:32:42.863455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-07-16 01:32:42.863466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-07-16 01:32:42.863688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-07-16 01:32:42.863699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-07-16 01:32:42.863863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-07-16 01:32:42.863873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-07-16 01:32:42.864022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-07-16 01:32:42.864032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-07-16 01:32:42.864183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-07-16 01:32:42.864194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 
00:27:17.066 [2024-07-16 01:32:42.864263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-07-16 01:32:42.864272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-07-16 01:32:42.864425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-07-16 01:32:42.864438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-07-16 01:32:42.864516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-07-16 01:32:42.864527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-07-16 01:32:42.864672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-07-16 01:32:42.864683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-07-16 01:32:42.864831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-07-16 01:32:42.864841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-07-16 01:32:42.864937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-07-16 01:32:42.864949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-07-16 01:32:42.865038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-07-16 01:32:42.865049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-07-16 01:32:42.865184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-07-16 01:32:42.865196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-07-16 01:32:42.865356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-07-16 01:32:42.865370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-07-16 01:32:42.865435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-07-16 01:32:42.865445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 
00:27:17.066 [2024-07-16 01:32:42.865604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-07-16 01:32:42.865615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-07-16 01:32:42.865697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-07-16 01:32:42.865708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-07-16 01:32:42.865839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-07-16 01:32:42.865850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-07-16 01:32:42.866006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-07-16 01:32:42.866017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-07-16 01:32:42.866148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-07-16 01:32:42.866159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-07-16 01:32:42.866233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-07-16 01:32:42.866244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-07-16 01:32:42.866310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-07-16 01:32:42.866320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-07-16 01:32:42.866387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-07-16 01:32:42.866397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-07-16 01:32:42.866455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-07-16 01:32:42.866465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-07-16 01:32:42.866534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-07-16 01:32:42.866544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 
00:27:17.066 [2024-07-16 01:32:42.866673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-07-16 01:32:42.866683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-07-16 01:32:42.866760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-07-16 01:32:42.866772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-07-16 01:32:42.866906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-07-16 01:32:42.866916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-07-16 01:32:42.866982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-07-16 01:32:42.866995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-07-16 01:32:42.867067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-07-16 01:32:42.867077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-07-16 01:32:42.867138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-07-16 01:32:42.867148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-07-16 01:32:42.867238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-07-16 01:32:42.867249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-07-16 01:32:42.867386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-07-16 01:32:42.867397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-07-16 01:32:42.867536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-07-16 01:32:42.867547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-07-16 01:32:42.867628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-07-16 01:32:42.867638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 
00:27:17.067 [2024-07-16 01:32:42.867812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-07-16 01:32:42.867823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-07-16 01:32:42.867902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-07-16 01:32:42.867912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-07-16 01:32:42.867994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-07-16 01:32:42.868005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-07-16 01:32:42.868084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-07-16 01:32:42.868094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-07-16 01:32:42.868227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-07-16 01:32:42.868237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-07-16 01:32:42.868348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-07-16 01:32:42.868362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-07-16 01:32:42.868431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-07-16 01:32:42.868442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-07-16 01:32:42.868527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-07-16 01:32:42.868538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-07-16 01:32:42.868630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-07-16 01:32:42.868641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-07-16 01:32:42.868724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-07-16 01:32:42.868734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 
00:27:17.067 [2024-07-16 01:32:42.868793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-07-16 01:32:42.868804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-07-16 01:32:42.868871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-07-16 01:32:42.868880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-07-16 01:32:42.868944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-07-16 01:32:42.868955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-07-16 01:32:42.869014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-07-16 01:32:42.869023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-07-16 01:32:42.869084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-07-16 01:32:42.869094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-07-16 01:32:42.869150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-07-16 01:32:42.869160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-07-16 01:32:42.869365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-07-16 01:32:42.869377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-07-16 01:32:42.869521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-07-16 01:32:42.869531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-07-16 01:32:42.869602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-07-16 01:32:42.869615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-07-16 01:32:42.869690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-07-16 01:32:42.869701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 
00:27:17.067 [2024-07-16 01:32:42.869915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-07-16 01:32:42.869926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-07-16 01:32:42.870008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-07-16 01:32:42.870019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-07-16 01:32:42.870091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-07-16 01:32:42.870103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-07-16 01:32:42.870297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-07-16 01:32:42.870308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-07-16 01:32:42.870381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-07-16 01:32:42.870393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-07-16 01:32:42.870477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-07-16 01:32:42.870487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-07-16 01:32:42.870630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-07-16 01:32:42.870642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-07-16 01:32:42.870702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-07-16 01:32:42.870712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-07-16 01:32:42.870784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-07-16 01:32:42.870793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-07-16 01:32:42.870939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-07-16 01:32:42.870950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 
00:27:17.067 [2024-07-16 01:32:42.871039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.067 [2024-07-16 01:32:42.871049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.067 qpair failed and we were unable to recover it. 00:27:17.067 [2024-07-16 01:32:42.871112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-07-16 01:32:42.871123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-07-16 01:32:42.871218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-07-16 01:32:42.871229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-07-16 01:32:42.871292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-07-16 01:32:42.871301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-07-16 01:32:42.871386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-07-16 01:32:42.871398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-07-16 01:32:42.871469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-07-16 01:32:42.871480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-07-16 01:32:42.871619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-07-16 01:32:42.871629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-07-16 01:32:42.871781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-07-16 01:32:42.871792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-07-16 01:32:42.871859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-07-16 01:32:42.871870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-07-16 01:32:42.871945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-07-16 01:32:42.871955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 
00:27:17.068 [2024-07-16 01:32:42.872019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-07-16 01:32:42.872030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-07-16 01:32:42.872112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-07-16 01:32:42.872122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-07-16 01:32:42.872182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-07-16 01:32:42.872191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-07-16 01:32:42.872438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-07-16 01:32:42.872450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-07-16 01:32:42.872538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-07-16 01:32:42.872549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-07-16 01:32:42.872627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-07-16 01:32:42.872638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-07-16 01:32:42.872856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-07-16 01:32:42.872867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-07-16 01:32:42.873010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-07-16 01:32:42.873021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-07-16 01:32:42.873128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-07-16 01:32:42.873139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-07-16 01:32:42.873355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-07-16 01:32:42.873367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 
00:27:17.068 [2024-07-16 01:32:42.873437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-07-16 01:32:42.873448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-07-16 01:32:42.873584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-07-16 01:32:42.873595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-07-16 01:32:42.873813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-07-16 01:32:42.873825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-07-16 01:32:42.873890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-07-16 01:32:42.873900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-07-16 01:32:42.873984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-07-16 01:32:42.873995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-07-16 01:32:42.874142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-07-16 01:32:42.874153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-07-16 01:32:42.874219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-07-16 01:32:42.874231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-07-16 01:32:42.874388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-07-16 01:32:42.874399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-07-16 01:32:42.874602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-07-16 01:32:42.874617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-07-16 01:32:42.874755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-07-16 01:32:42.874766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 
00:27:17.068 [2024-07-16 01:32:42.874932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-07-16 01:32:42.874943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-07-16 01:32:42.875092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-07-16 01:32:42.875103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-07-16 01:32:42.875237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-07-16 01:32:42.875248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-07-16 01:32:42.875329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-07-16 01:32:42.875346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-07-16 01:32:42.875472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-07-16 01:32:42.875483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-07-16 01:32:42.875569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-07-16 01:32:42.875581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-07-16 01:32:42.875713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-07-16 01:32:42.875724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-07-16 01:32:42.875922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-07-16 01:32:42.875933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-07-16 01:32:42.875995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-07-16 01:32:42.876006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-07-16 01:32:42.876103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-07-16 01:32:42.876115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 
00:27:17.068 [2024-07-16 01:32:42.876181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-07-16 01:32:42.876192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-07-16 01:32:42.876335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.068 [2024-07-16 01:32:42.876357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.068 qpair failed and we were unable to recover it. 00:27:17.068 [2024-07-16 01:32:42.876455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-07-16 01:32:42.876466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-07-16 01:32:42.876543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-07-16 01:32:42.876553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-07-16 01:32:42.876625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-07-16 01:32:42.876636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-07-16 01:32:42.876703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-07-16 01:32:42.876713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-07-16 01:32:42.876852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-07-16 01:32:42.876864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-07-16 01:32:42.876946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-07-16 01:32:42.876957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-07-16 01:32:42.877088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-07-16 01:32:42.877099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-07-16 01:32:42.877239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-07-16 01:32:42.877250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 
00:27:17.069 [2024-07-16 01:32:42.877331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-07-16 01:32:42.877346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-07-16 01:32:42.877417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-07-16 01:32:42.877429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-07-16 01:32:42.877586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-07-16 01:32:42.877597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-07-16 01:32:42.877660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-07-16 01:32:42.877671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-07-16 01:32:42.877872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-07-16 01:32:42.877883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-07-16 01:32:42.877952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-07-16 01:32:42.877963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-07-16 01:32:42.878055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-07-16 01:32:42.878065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-07-16 01:32:42.878130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-07-16 01:32:42.878140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-07-16 01:32:42.878277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-07-16 01:32:42.878288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-07-16 01:32:42.878370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-07-16 01:32:42.878382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 
00:27:17.069 [2024-07-16 01:32:42.878452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-07-16 01:32:42.878462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-07-16 01:32:42.878527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-07-16 01:32:42.878538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-07-16 01:32:42.878684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-07-16 01:32:42.878695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-07-16 01:32:42.878757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-07-16 01:32:42.878768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-07-16 01:32:42.878854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-07-16 01:32:42.878864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-07-16 01:32:42.878947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-07-16 01:32:42.878957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-07-16 01:32:42.879030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-07-16 01:32:42.879041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-07-16 01:32:42.879182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-07-16 01:32:42.879193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-07-16 01:32:42.879322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-07-16 01:32:42.879336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-07-16 01:32:42.879540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-07-16 01:32:42.879551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 
00:27:17.069 [2024-07-16 01:32:42.879620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-07-16 01:32:42.879630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-07-16 01:32:42.879761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-07-16 01:32:42.879771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-07-16 01:32:42.879860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-07-16 01:32:42.879871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-07-16 01:32:42.879955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-07-16 01:32:42.879965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-07-16 01:32:42.880054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-07-16 01:32:42.880065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-07-16 01:32:42.880192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-07-16 01:32:42.880204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-07-16 01:32:42.880352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-07-16 01:32:42.880364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-07-16 01:32:42.880504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-07-16 01:32:42.880515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-07-16 01:32:42.880659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-07-16 01:32:42.880670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-07-16 01:32:42.880801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-07-16 01:32:42.880812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 
00:27:17.070 [2024-07-16 01:32:42.880908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-07-16 01:32:42.880919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-07-16 01:32:42.881061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-07-16 01:32:42.881072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-07-16 01:32:42.881219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-07-16 01:32:42.881229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-07-16 01:32:42.881374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-07-16 01:32:42.881386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-07-16 01:32:42.881464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-07-16 01:32:42.881475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-07-16 01:32:42.881542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-07-16 01:32:42.881553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-07-16 01:32:42.881629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-07-16 01:32:42.881643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-07-16 01:32:42.881846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-07-16 01:32:42.881857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-07-16 01:32:42.881938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-07-16 01:32:42.881949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-07-16 01:32:42.882094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-07-16 01:32:42.882105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 
00:27:17.070 [2024-07-16 01:32:42.882249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-07-16 01:32:42.882261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-07-16 01:32:42.882343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-07-16 01:32:42.882354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-07-16 01:32:42.882430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-07-16 01:32:42.882441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-07-16 01:32:42.882521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-07-16 01:32:42.882532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-07-16 01:32:42.882603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-07-16 01:32:42.882613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-07-16 01:32:42.882744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-07-16 01:32:42.882755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-07-16 01:32:42.882826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-07-16 01:32:42.882836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-07-16 01:32:42.882977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-07-16 01:32:42.882988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-07-16 01:32:42.883086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-07-16 01:32:42.883097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-07-16 01:32:42.883238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-07-16 01:32:42.883249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 
00:27:17.070 [2024-07-16 01:32:42.883317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-07-16 01:32:42.883326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-07-16 01:32:42.883408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-07-16 01:32:42.883419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-07-16 01:32:42.883560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-07-16 01:32:42.883571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-07-16 01:32:42.883643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-07-16 01:32:42.883654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-07-16 01:32:42.883787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-07-16 01:32:42.883799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-07-16 01:32:42.883943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-07-16 01:32:42.883955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-07-16 01:32:42.884016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-07-16 01:32:42.884027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-07-16 01:32:42.884230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-07-16 01:32:42.884242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-07-16 01:32:42.884318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-07-16 01:32:42.884332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-07-16 01:32:42.884419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-07-16 01:32:42.884432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 
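[editor's note: errno = 111 is ECONNREFUSED on Linux; the connect() to 10.0.0.2:4420 is being actively refused, which usually means no NVMe-oF TCP listener is up on that address and port yet, so each host reconnect attempt fails immediately rather than timing out.]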
00:27:17.070 [2024-07-16 01:32:42.884502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.071 [2024-07-16 01:32:42.884513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.071 qpair failed and we were unable to recover it. 00:27:17.071 [2024-07-16 01:32:42.884662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.071 [2024-07-16 01:32:42.884673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.071 qpair failed and we were unable to recover it. 00:27:17.071 [2024-07-16 01:32:42.884752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.071 [2024-07-16 01:32:42.884765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.071 qpair failed and we were unable to recover it. 00:27:17.071 [2024-07-16 01:32:42.884903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.071 [2024-07-16 01:32:42.884915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.071 qpair failed and we were unable to recover it. 00:27:17.071 [2024-07-16 01:32:42.885085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.071 [2024-07-16 01:32:42.885097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.071 qpair failed and we were unable to recover it. 00:27:17.071 [2024-07-16 01:32:42.885225] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:17.071 [2024-07-16 01:32:42.885250] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:17.071 [2024-07-16 01:32:42.885253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.071 [2024-07-16 01:32:42.885257] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:17.071 [2024-07-16 01:32:42.885264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.071 [2024-07-16 01:32:42.885265] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:17.071 qpair failed and we were unable to recover it. 00:27:17.071 [2024-07-16 01:32:42.885271] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:17.071 [2024-07-16 01:32:42.885331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.071 [2024-07-16 01:32:42.885360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.071 qpair failed and we were unable to recover it. 00:27:17.071 [2024-07-16 01:32:42.885384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:27:17.071 [2024-07-16 01:32:42.885508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.071 [2024-07-16 01:32:42.885520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.071 qpair failed and we were unable to recover it. 
00:27:17.071 [2024-07-16 01:32:42.885490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:27:17.071 [2024-07-16 01:32:42.885601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:27:17.071 [2024-07-16 01:32:42.885601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:27:17.071 [2024-07-16 01:32:42.885604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-07-16 01:32:42.885615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 qpair failed and we were unable to recover it.
00:27:17.071 [2024-07-16 01:32:42.885683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-07-16 01:32:42.885693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 qpair failed and we were unable to recover it.
00:27:17.071 [... seven more identical attempts, 01:32:42.885844 through 01:32:42.886544, omitted ...]
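[editor's note: the *NOTICE* lines above are normal SPDK startup output; tracing was enabled with group mask 0xFFFF and one reactor thread was started on each of cores 4-7, i.e. an SPDK application (plausibly the target side of this test) was still coming up while the host's connect() retries were being refused.]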
00:27:17.071 [... the same failure sequence against tqpair=0x7ff0bc000b90 continues, 01:32:42.886679 through 01:32:42.888852, 18 more identical attempts omitted ...]
00:27:17.071 [2024-07-16 01:32:42.888940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-07-16 01:32:42.888965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 qpair failed and we were unable to recover it.
00:27:17.071 [2024-07-16 01:32:42.889129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-07-16 01:32:42.889146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 qpair failed and we were unable to recover it.
00:27:17.072 [2024-07-16 01:32:42.889305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-07-16 01:32:42.889321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 qpair failed and we were unable to recover it.
00:27:17.072 [... the three attempts above report a different tqpair (0x1d2cfc0); the failure sequence then resumes against tqpair=0x7ff0bc000b90 from 01:32:42.889412 through 01:32:42.893033, 29 more identical attempts omitted ...]
00:27:17.072 [2024-07-16 01:32:42.893199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.072 [2024-07-16 01:32:42.893210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.072 qpair failed and we were unable to recover it. 00:27:17.072 [2024-07-16 01:32:42.893347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.072 [2024-07-16 01:32:42.893363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.072 qpair failed and we were unable to recover it. 00:27:17.072 [2024-07-16 01:32:42.893462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.072 [2024-07-16 01:32:42.893474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.072 qpair failed and we were unable to recover it. 00:27:17.072 [2024-07-16 01:32:42.893550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.072 [2024-07-16 01:32:42.893561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.072 qpair failed and we were unable to recover it. 00:27:17.072 [2024-07-16 01:32:42.893694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.072 [2024-07-16 01:32:42.893706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.072 qpair failed and we were unable to recover it. 00:27:17.072 [2024-07-16 01:32:42.893778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.072 [2024-07-16 01:32:42.893790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.072 qpair failed and we were unable to recover it. 00:27:17.072 [2024-07-16 01:32:42.893928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.072 [2024-07-16 01:32:42.893939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.072 qpair failed and we were unable to recover it. 00:27:17.072 [2024-07-16 01:32:42.894068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.072 [2024-07-16 01:32:42.894080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.072 qpair failed and we were unable to recover it. 00:27:17.072 [2024-07-16 01:32:42.894167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-07-16 01:32:42.894181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-07-16 01:32:42.894324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-07-16 01:32:42.894346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 
00:27:17.073 [2024-07-16 01:32:42.894583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-07-16 01:32:42.894596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-07-16 01:32:42.894678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-07-16 01:32:42.894690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-07-16 01:32:42.894776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-07-16 01:32:42.894787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-07-16 01:32:42.894947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-07-16 01:32:42.894960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-07-16 01:32:42.895054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-07-16 01:32:42.895065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-07-16 01:32:42.895198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-07-16 01:32:42.895210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-07-16 01:32:42.895347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-07-16 01:32:42.895358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-07-16 01:32:42.895431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-07-16 01:32:42.895443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-07-16 01:32:42.895520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-07-16 01:32:42.895531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-07-16 01:32:42.895664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-07-16 01:32:42.895675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 
00:27:17.073 [2024-07-16 01:32:42.895826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-07-16 01:32:42.895836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-07-16 01:32:42.895926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-07-16 01:32:42.895937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-07-16 01:32:42.896144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-07-16 01:32:42.896156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-07-16 01:32:42.896223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-07-16 01:32:42.896235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-07-16 01:32:42.896472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-07-16 01:32:42.896485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-07-16 01:32:42.896558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-07-16 01:32:42.896569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-07-16 01:32:42.896721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-07-16 01:32:42.896733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-07-16 01:32:42.896881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-07-16 01:32:42.896893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-07-16 01:32:42.897140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-07-16 01:32:42.897152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-07-16 01:32:42.897355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-07-16 01:32:42.897369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 
00:27:17.073 [2024-07-16 01:32:42.897459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-07-16 01:32:42.897470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-07-16 01:32:42.897614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-07-16 01:32:42.897626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-07-16 01:32:42.897758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-07-16 01:32:42.897769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-07-16 01:32:42.897855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-07-16 01:32:42.897866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-07-16 01:32:42.897936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-07-16 01:32:42.897947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-07-16 01:32:42.898158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-07-16 01:32:42.898169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-07-16 01:32:42.898268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-07-16 01:32:42.898280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-07-16 01:32:42.898372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-07-16 01:32:42.898384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-07-16 01:32:42.898449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-07-16 01:32:42.898460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-07-16 01:32:42.898606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-07-16 01:32:42.898617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 
00:27:17.073 [2024-07-16 01:32:42.898707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.073 [2024-07-16 01:32:42.898718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.073 qpair failed and we were unable to recover it. 00:27:17.073 [2024-07-16 01:32:42.898782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-07-16 01:32:42.898794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-07-16 01:32:42.898856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-07-16 01:32:42.898868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-07-16 01:32:42.898966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-07-16 01:32:42.898978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-07-16 01:32:42.899120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-07-16 01:32:42.899131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-07-16 01:32:42.899208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-07-16 01:32:42.899219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-07-16 01:32:42.899419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-07-16 01:32:42.899432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-07-16 01:32:42.899518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-07-16 01:32:42.899529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-07-16 01:32:42.899662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-07-16 01:32:42.899680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-07-16 01:32:42.899900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-07-16 01:32:42.899912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 
00:27:17.074 [2024-07-16 01:32:42.899991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-07-16 01:32:42.900002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-07-16 01:32:42.900144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-07-16 01:32:42.900156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-07-16 01:32:42.900291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-07-16 01:32:42.900302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-07-16 01:32:42.900394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-07-16 01:32:42.900407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-07-16 01:32:42.900540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-07-16 01:32:42.900551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-07-16 01:32:42.900728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-07-16 01:32:42.900740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-07-16 01:32:42.900808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-07-16 01:32:42.900819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-07-16 01:32:42.900968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-07-16 01:32:42.900981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-07-16 01:32:42.901067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-07-16 01:32:42.901078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-07-16 01:32:42.901261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-07-16 01:32:42.901273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 
00:27:17.074 [2024-07-16 01:32:42.901430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-07-16 01:32:42.901442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-07-16 01:32:42.901643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-07-16 01:32:42.901657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-07-16 01:32:42.901827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-07-16 01:32:42.901839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-07-16 01:32:42.901978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-07-16 01:32:42.901990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-07-16 01:32:42.902080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-07-16 01:32:42.902092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-07-16 01:32:42.902162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-07-16 01:32:42.902172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-07-16 01:32:42.902254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-07-16 01:32:42.902264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-07-16 01:32:42.902422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-07-16 01:32:42.902435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-07-16 01:32:42.902568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-07-16 01:32:42.902580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-07-16 01:32:42.902727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-07-16 01:32:42.902740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 
00:27:17.074 [2024-07-16 01:32:42.902820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-07-16 01:32:42.902831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-07-16 01:32:42.902895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-07-16 01:32:42.902905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-07-16 01:32:42.903050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-07-16 01:32:42.903062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-07-16 01:32:42.903153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-07-16 01:32:42.903164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-07-16 01:32:42.903233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-07-16 01:32:42.903244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-07-16 01:32:42.903375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-07-16 01:32:42.903389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-07-16 01:32:42.903468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-07-16 01:32:42.903480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-07-16 01:32:42.903619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-07-16 01:32:42.903631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-07-16 01:32:42.903696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-07-16 01:32:42.903706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-07-16 01:32:42.903835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-07-16 01:32:42.903846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 
00:27:17.074 [2024-07-16 01:32:42.904033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-07-16 01:32:42.904046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.074 [2024-07-16 01:32:42.904113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.074 [2024-07-16 01:32:42.904124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.074 qpair failed and we were unable to recover it. 00:27:17.075 [2024-07-16 01:32:42.904326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.075 [2024-07-16 01:32:42.904342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.075 qpair failed and we were unable to recover it. 00:27:17.075 [2024-07-16 01:32:42.904485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.075 [2024-07-16 01:32:42.904496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.075 qpair failed and we were unable to recover it. 00:27:17.075 [2024-07-16 01:32:42.904577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.075 [2024-07-16 01:32:42.904588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.075 qpair failed and we were unable to recover it. 00:27:17.075 [2024-07-16 01:32:42.904728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.075 [2024-07-16 01:32:42.904740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.075 qpair failed and we were unable to recover it. 00:27:17.075 [2024-07-16 01:32:42.904810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.075 [2024-07-16 01:32:42.904821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.075 qpair failed and we were unable to recover it. 00:27:17.075 [2024-07-16 01:32:42.904955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.075 [2024-07-16 01:32:42.904968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.075 qpair failed and we were unable to recover it. 00:27:17.075 [2024-07-16 01:32:42.905026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.075 [2024-07-16 01:32:42.905040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.075 qpair failed and we were unable to recover it. 00:27:17.075 [2024-07-16 01:32:42.905175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.075 [2024-07-16 01:32:42.905187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.075 qpair failed and we were unable to recover it. 
00:27:17.075 [2024-07-16 01:32:42.905252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.075 [2024-07-16 01:32:42.905262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.075 qpair failed and we were unable to recover it. 00:27:17.075 [2024-07-16 01:32:42.905393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.075 [2024-07-16 01:32:42.905407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.075 qpair failed and we were unable to recover it. 00:27:17.075 [2024-07-16 01:32:42.905511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.075 [2024-07-16 01:32:42.905523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.075 qpair failed and we were unable to recover it. 00:27:17.075 [2024-07-16 01:32:42.905612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.075 [2024-07-16 01:32:42.905623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.075 qpair failed and we were unable to recover it. 00:27:17.075 [2024-07-16 01:32:42.905773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.075 [2024-07-16 01:32:42.905785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.075 qpair failed and we were unable to recover it. 00:27:17.075 [2024-07-16 01:32:42.905859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.075 [2024-07-16 01:32:42.905871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.075 qpair failed and we were unable to recover it. 00:27:17.075 [2024-07-16 01:32:42.906010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.075 [2024-07-16 01:32:42.906021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.075 qpair failed and we were unable to recover it. 00:27:17.075 [2024-07-16 01:32:42.906097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.075 [2024-07-16 01:32:42.906108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.075 qpair failed and we were unable to recover it. 00:27:17.075 [2024-07-16 01:32:42.906193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.075 [2024-07-16 01:32:42.906204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.075 qpair failed and we were unable to recover it. 00:27:17.075 [2024-07-16 01:32:42.906344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.075 [2024-07-16 01:32:42.906356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.075 qpair failed and we were unable to recover it. 
00:27:17.075 [2024-07-16 01:32:42.906417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.075 [2024-07-16 01:32:42.906427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.075 qpair failed and we were unable to recover it. 00:27:17.075 [2024-07-16 01:32:42.906496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.075 [2024-07-16 01:32:42.906507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.075 qpair failed and we were unable to recover it. 00:27:17.075 [2024-07-16 01:32:42.906671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.075 [2024-07-16 01:32:42.906681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.075 qpair failed and we were unable to recover it. 00:27:17.075 [2024-07-16 01:32:42.906764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.075 [2024-07-16 01:32:42.906774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.075 qpair failed and we were unable to recover it. 00:27:17.075 [2024-07-16 01:32:42.906859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.075 [2024-07-16 01:32:42.906871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.075 qpair failed and we were unable to recover it. 00:27:17.075 [2024-07-16 01:32:42.907012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.075 [2024-07-16 01:32:42.907023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.075 qpair failed and we were unable to recover it. 00:27:17.075 [2024-07-16 01:32:42.907099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.075 [2024-07-16 01:32:42.907110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.075 qpair failed and we were unable to recover it. 00:27:17.075 [2024-07-16 01:32:42.907251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.075 [2024-07-16 01:32:42.907263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.075 qpair failed and we were unable to recover it. 00:27:17.075 [2024-07-16 01:32:42.907335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.075 [2024-07-16 01:32:42.907353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.075 qpair failed and we were unable to recover it. 00:27:17.075 [2024-07-16 01:32:42.907424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.075 [2024-07-16 01:32:42.907433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.075 qpair failed and we were unable to recover it. 
00:27:17.075 [2024-07-16 01:32:42.907653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.075 [2024-07-16 01:32:42.907666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.075 qpair failed and we were unable to recover it. 00:27:17.075 [2024-07-16 01:32:42.907755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.075 [2024-07-16 01:32:42.907767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.075 qpair failed and we were unable to recover it. 00:27:17.075 [2024-07-16 01:32:42.907938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.075 [2024-07-16 01:32:42.907950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.075 qpair failed and we were unable to recover it. 00:27:17.075 [2024-07-16 01:32:42.908035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.075 [2024-07-16 01:32:42.908047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.075 qpair failed and we were unable to recover it. 00:27:17.075 [2024-07-16 01:32:42.908128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.075 [2024-07-16 01:32:42.908138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.075 qpair failed and we were unable to recover it. 00:27:17.075 [2024-07-16 01:32:42.908225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.075 [2024-07-16 01:32:42.908237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.075 qpair failed and we were unable to recover it. 00:27:17.075 [2024-07-16 01:32:42.908303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.075 [2024-07-16 01:32:42.908313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.075 qpair failed and we were unable to recover it. 00:27:17.075 [2024-07-16 01:32:42.908397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.075 [2024-07-16 01:32:42.908408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.075 qpair failed and we were unable to recover it. 00:27:17.075 [2024-07-16 01:32:42.908567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.075 [2024-07-16 01:32:42.908579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.075 qpair failed and we were unable to recover it. 00:27:17.075 [2024-07-16 01:32:42.908649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.075 [2024-07-16 01:32:42.908659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.075 qpair failed and we were unable to recover it. 
00:27:17.075 [2024-07-16 01:32:42.908743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.075 [2024-07-16 01:32:42.908753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.075 qpair failed and we were unable to recover it. 00:27:17.075 [2024-07-16 01:32:42.908909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.075 [2024-07-16 01:32:42.908922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.075 qpair failed and we were unable to recover it. 00:27:17.075 [2024-07-16 01:32:42.909082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.075 [2024-07-16 01:32:42.909093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.075 qpair failed and we were unable to recover it. 00:27:17.076 [2024-07-16 01:32:42.909172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.076 [2024-07-16 01:32:42.909183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.076 qpair failed and we were unable to recover it. 00:27:17.076 [2024-07-16 01:32:42.909315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.076 [2024-07-16 01:32:42.909326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.076 qpair failed and we were unable to recover it. 00:27:17.076 [2024-07-16 01:32:42.909402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.076 [2024-07-16 01:32:42.909415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.076 qpair failed and we were unable to recover it. 00:27:17.076 [2024-07-16 01:32:42.909494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.076 [2024-07-16 01:32:42.909506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.076 qpair failed and we were unable to recover it. 00:27:17.076 [2024-07-16 01:32:42.909583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.076 [2024-07-16 01:32:42.909594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.076 qpair failed and we were unable to recover it. 00:27:17.076 [2024-07-16 01:32:42.909738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.076 [2024-07-16 01:32:42.909752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.076 qpair failed and we were unable to recover it. 00:27:17.076 [2024-07-16 01:32:42.909819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.076 [2024-07-16 01:32:42.909829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.076 qpair failed and we were unable to recover it. 
00:27:17.076 [2024-07-16 01:32:42.909903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.076 [2024-07-16 01:32:42.909915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.076 qpair failed and we were unable to recover it. 00:27:17.076 [2024-07-16 01:32:42.910042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.076 [2024-07-16 01:32:42.910054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.076 qpair failed and we were unable to recover it. 00:27:17.076 [2024-07-16 01:32:42.910309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.076 [2024-07-16 01:32:42.910322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.076 qpair failed and we were unable to recover it. 00:27:17.076 [2024-07-16 01:32:42.910412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.076 [2024-07-16 01:32:42.910424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.076 qpair failed and we were unable to recover it. 00:27:17.076 [2024-07-16 01:32:42.910581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.076 [2024-07-16 01:32:42.910592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.076 qpair failed and we were unable to recover it. 00:27:17.076 [2024-07-16 01:32:42.910669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.076 [2024-07-16 01:32:42.910680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.076 qpair failed and we were unable to recover it. 00:27:17.076 [2024-07-16 01:32:42.910838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.076 [2024-07-16 01:32:42.910849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.076 qpair failed and we were unable to recover it. 00:27:17.076 [2024-07-16 01:32:42.910947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.076 [2024-07-16 01:32:42.910959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.076 qpair failed and we were unable to recover it. 00:27:17.076 [2024-07-16 01:32:42.911046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.076 [2024-07-16 01:32:42.911058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.076 qpair failed and we were unable to recover it. 00:27:17.076 [2024-07-16 01:32:42.911199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.076 [2024-07-16 01:32:42.911212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.076 qpair failed and we were unable to recover it. 
00:27:17.076 [2024-07-16 01:32:42.911356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.076 [2024-07-16 01:32:42.911368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.076 qpair failed and we were unable to recover it. 00:27:17.076 [2024-07-16 01:32:42.911435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.076 [2024-07-16 01:32:42.911446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.076 qpair failed and we were unable to recover it. 00:27:17.076 [2024-07-16 01:32:42.911610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.076 [2024-07-16 01:32:42.911621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.076 qpair failed and we were unable to recover it. 00:27:17.076 [2024-07-16 01:32:42.911776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.076 [2024-07-16 01:32:42.911787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.076 qpair failed and we were unable to recover it. 00:27:17.076 [2024-07-16 01:32:42.911993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.076 [2024-07-16 01:32:42.912005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.076 qpair failed and we were unable to recover it. 00:27:17.076 [2024-07-16 01:32:42.912062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.076 [2024-07-16 01:32:42.912072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.076 qpair failed and we were unable to recover it. 00:27:17.076 [2024-07-16 01:32:42.912217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.076 [2024-07-16 01:32:42.912229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.076 qpair failed and we were unable to recover it. 00:27:17.076 [2024-07-16 01:32:42.912317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.076 [2024-07-16 01:32:42.912328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.076 qpair failed and we were unable to recover it. 00:27:17.076 [2024-07-16 01:32:42.912400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.076 [2024-07-16 01:32:42.912411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.076 qpair failed and we were unable to recover it. 00:27:17.076 [2024-07-16 01:32:42.912543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.076 [2024-07-16 01:32:42.912555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.076 qpair failed and we were unable to recover it. 
00:27:17.076 [2024-07-16 01:32:42.912630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.076 [2024-07-16 01:32:42.912641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.076 qpair failed and we were unable to recover it. 00:27:17.076 [2024-07-16 01:32:42.912723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.076 [2024-07-16 01:32:42.912734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.076 qpair failed and we were unable to recover it. 00:27:17.076 [2024-07-16 01:32:42.912891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.076 [2024-07-16 01:32:42.912902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.076 qpair failed and we were unable to recover it. 00:27:17.076 [2024-07-16 01:32:42.912986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.076 [2024-07-16 01:32:42.912997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.076 qpair failed and we were unable to recover it. 00:27:17.076 [2024-07-16 01:32:42.913071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.076 [2024-07-16 01:32:42.913082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.076 qpair failed and we were unable to recover it. 00:27:17.076 [2024-07-16 01:32:42.913155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.076 [2024-07-16 01:32:42.913167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.076 qpair failed and we were unable to recover it. 00:27:17.076 [2024-07-16 01:32:42.913244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.076 [2024-07-16 01:32:42.913256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.076 qpair failed and we were unable to recover it. 00:27:17.076 [2024-07-16 01:32:42.913333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.076 [2024-07-16 01:32:42.913350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.076 qpair failed and we were unable to recover it. 00:27:17.076 [2024-07-16 01:32:42.913441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.076 [2024-07-16 01:32:42.913452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.076 qpair failed and we were unable to recover it. 00:27:17.076 [2024-07-16 01:32:42.913536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.076 [2024-07-16 01:32:42.913547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.077 qpair failed and we were unable to recover it. 
00:27:17.077 [2024-07-16 01:32:42.913683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.077 [2024-07-16 01:32:42.913695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.077 qpair failed and we were unable to recover it. 00:27:17.077 [2024-07-16 01:32:42.913842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.077 [2024-07-16 01:32:42.913854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.077 qpair failed and we were unable to recover it. 00:27:17.077 [2024-07-16 01:32:42.913918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.077 [2024-07-16 01:32:42.913929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.077 qpair failed and we were unable to recover it. 00:27:17.077 [2024-07-16 01:32:42.914061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.077 [2024-07-16 01:32:42.914072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.077 qpair failed and we were unable to recover it. 00:27:17.077 [2024-07-16 01:32:42.914211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.077 [2024-07-16 01:32:42.914222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.077 qpair failed and we were unable to recover it. 00:27:17.077 [2024-07-16 01:32:42.914393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.077 [2024-07-16 01:32:42.914406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.077 qpair failed and we were unable to recover it. 00:27:17.077 [2024-07-16 01:32:42.914480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.077 [2024-07-16 01:32:42.914490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.077 qpair failed and we were unable to recover it. 00:27:17.077 [2024-07-16 01:32:42.914565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.077 [2024-07-16 01:32:42.914577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.077 qpair failed and we were unable to recover it. 00:27:17.077 [2024-07-16 01:32:42.914660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.077 [2024-07-16 01:32:42.914675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.077 qpair failed and we were unable to recover it. 00:27:17.077 [2024-07-16 01:32:42.914900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.077 [2024-07-16 01:32:42.914912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.077 qpair failed and we were unable to recover it. 
00:27:17.077 [2024-07-16 01:32:42.915054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.077 [2024-07-16 01:32:42.915065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.077 qpair failed and we were unable to recover it. 00:27:17.077 [2024-07-16 01:32:42.915120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.077 [2024-07-16 01:32:42.915130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.077 qpair failed and we were unable to recover it. 00:27:17.077 [2024-07-16 01:32:42.915215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.077 [2024-07-16 01:32:42.915226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.077 qpair failed and we were unable to recover it. 00:27:17.077 [2024-07-16 01:32:42.915291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.077 [2024-07-16 01:32:42.915303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.077 qpair failed and we were unable to recover it. 00:27:17.077 [2024-07-16 01:32:42.915370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.077 [2024-07-16 01:32:42.915382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.077 qpair failed and we were unable to recover it. 00:27:17.077 [2024-07-16 01:32:42.915477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.077 [2024-07-16 01:32:42.915488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.077 qpair failed and we were unable to recover it. 00:27:17.077 [2024-07-16 01:32:42.915747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.077 [2024-07-16 01:32:42.915759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.077 qpair failed and we were unable to recover it. 00:27:17.077 [2024-07-16 01:32:42.915850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.077 [2024-07-16 01:32:42.915862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.077 qpair failed and we were unable to recover it. 00:27:17.077 [2024-07-16 01:32:42.915935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.077 [2024-07-16 01:32:42.915946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.077 qpair failed and we were unable to recover it. 00:27:17.077 [2024-07-16 01:32:42.916100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.077 [2024-07-16 01:32:42.916112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.077 qpair failed and we were unable to recover it. 
00:27:17.077 [2024-07-16 01:32:42.916183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.077 [2024-07-16 01:32:42.916195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.077 qpair failed and we were unable to recover it. 00:27:17.077 [2024-07-16 01:32:42.916395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.077 [2024-07-16 01:32:42.916408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.077 qpair failed and we were unable to recover it. 00:27:17.077 [2024-07-16 01:32:42.916492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.077 [2024-07-16 01:32:42.916503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.077 qpair failed and we were unable to recover it. 00:27:17.077 [2024-07-16 01:32:42.916580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.077 [2024-07-16 01:32:42.916592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.077 qpair failed and we were unable to recover it. 00:27:17.077 [2024-07-16 01:32:42.916733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.077 [2024-07-16 01:32:42.916744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.077 qpair failed and we were unable to recover it. 00:27:17.077 [2024-07-16 01:32:42.916825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.077 [2024-07-16 01:32:42.916835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.077 qpair failed and we were unable to recover it. 00:27:17.077 [2024-07-16 01:32:42.916978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.077 [2024-07-16 01:32:42.916990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.077 qpair failed and we were unable to recover it. 00:27:17.077 [2024-07-16 01:32:42.917151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.077 [2024-07-16 01:32:42.917163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.077 qpair failed and we were unable to recover it. 00:27:17.077 [2024-07-16 01:32:42.917218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.077 [2024-07-16 01:32:42.917227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.077 qpair failed and we were unable to recover it. 00:27:17.077 [2024-07-16 01:32:42.917298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.077 [2024-07-16 01:32:42.917308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.077 qpair failed and we were unable to recover it. 
00:27:17.077 [2024-07-16 01:32:42.917394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.077 [2024-07-16 01:32:42.917408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.077 qpair failed and we were unable to recover it. 00:27:17.077 [2024-07-16 01:32:42.917496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.077 [2024-07-16 01:32:42.917508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.077 qpair failed and we were unable to recover it. 00:27:17.077 [2024-07-16 01:32:42.917572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.077 [2024-07-16 01:32:42.917582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.077 qpair failed and we were unable to recover it. 00:27:17.077 [2024-07-16 01:32:42.917678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.077 [2024-07-16 01:32:42.917691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.077 qpair failed and we were unable to recover it. 00:27:17.077 [2024-07-16 01:32:42.917769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.077 [2024-07-16 01:32:42.917780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.077 qpair failed and we were unable to recover it. 00:27:17.077 [2024-07-16 01:32:42.917895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.077 [2024-07-16 01:32:42.917938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:17.077 qpair failed and we were unable to recover it. 00:27:17.077 [2024-07-16 01:32:42.918082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.077 [2024-07-16 01:32:42.918109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:17.077 qpair failed and we were unable to recover it. 00:27:17.077 [2024-07-16 01:32:42.918195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.077 [2024-07-16 01:32:42.918211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:17.077 qpair failed and we were unable to recover it. 00:27:17.077 [2024-07-16 01:32:42.918356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.077 [2024-07-16 01:32:42.918373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:17.077 qpair failed and we were unable to recover it. 00:27:17.077 [2024-07-16 01:32:42.918456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.077 [2024-07-16 01:32:42.918473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:17.078 qpair failed and we were unable to recover it. 
00:27:17.078 [2024-07-16 01:32:42.918573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.078 [2024-07-16 01:32:42.918590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:17.078 qpair failed and we were unable to recover it. 00:27:17.078 [2024-07-16 01:32:42.918657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.078 [2024-07-16 01:32:42.918669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.078 qpair failed and we were unable to recover it. 00:27:17.078 [2024-07-16 01:32:42.918767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.078 [2024-07-16 01:32:42.918777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.078 qpair failed and we were unable to recover it. 00:27:17.078 [2024-07-16 01:32:42.918842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.078 [2024-07-16 01:32:42.918851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.078 qpair failed and we were unable to recover it. 00:27:17.078 [2024-07-16 01:32:42.918938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.078 [2024-07-16 01:32:42.918949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.078 qpair failed and we were unable to recover it. 00:27:17.078 [2024-07-16 01:32:42.919019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.078 [2024-07-16 01:32:42.919028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.078 qpair failed and we were unable to recover it. 00:27:17.078 [2024-07-16 01:32:42.919107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.078 [2024-07-16 01:32:42.919117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.078 qpair failed and we were unable to recover it. 00:27:17.078 [2024-07-16 01:32:42.919257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.078 [2024-07-16 01:32:42.919269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.078 qpair failed and we were unable to recover it. 00:27:17.078 [2024-07-16 01:32:42.919347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.078 [2024-07-16 01:32:42.919360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.078 qpair failed and we were unable to recover it. 00:27:17.078 [2024-07-16 01:32:42.919491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.078 [2024-07-16 01:32:42.919503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.078 qpair failed and we were unable to recover it. 
00:27:17.078 [2024-07-16 01:32:42.919637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.078 [2024-07-16 01:32:42.919647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.078 qpair failed and we were unable to recover it. 00:27:17.078 [2024-07-16 01:32:42.919722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.078 [2024-07-16 01:32:42.919732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.078 qpair failed and we were unable to recover it. 00:27:17.078 [2024-07-16 01:32:42.919809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.078 [2024-07-16 01:32:42.919819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.078 qpair failed and we were unable to recover it. 00:27:17.078 [2024-07-16 01:32:42.919958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.078 [2024-07-16 01:32:42.919969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.078 qpair failed and we were unable to recover it. 00:27:17.078 [2024-07-16 01:32:42.920050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.078 [2024-07-16 01:32:42.920060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.078 qpair failed and we were unable to recover it. 00:27:17.078 [2024-07-16 01:32:42.920137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.078 [2024-07-16 01:32:42.920148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.078 qpair failed and we were unable to recover it. 00:27:17.078 [2024-07-16 01:32:42.920279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.078 [2024-07-16 01:32:42.920291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.078 qpair failed and we were unable to recover it. 00:27:17.078 [2024-07-16 01:32:42.920364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.078 [2024-07-16 01:32:42.920374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.078 qpair failed and we were unable to recover it. 00:27:17.078 [2024-07-16 01:32:42.920450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.078 [2024-07-16 01:32:42.920461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.078 qpair failed and we were unable to recover it. 00:27:17.078 [2024-07-16 01:32:42.920598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.078 [2024-07-16 01:32:42.920608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.078 qpair failed and we were unable to recover it. 
00:27:17.078 [2024-07-16 01:32:42.920669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.078 [2024-07-16 01:32:42.920679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.078 qpair failed and we were unable to recover it. 00:27:17.078 [2024-07-16 01:32:42.920758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.078 [2024-07-16 01:32:42.920768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.078 qpair failed and we were unable to recover it. 00:27:17.078 [2024-07-16 01:32:42.920839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.078 [2024-07-16 01:32:42.920850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.078 qpair failed and we were unable to recover it. 00:27:17.078 [2024-07-16 01:32:42.920991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.078 [2024-07-16 01:32:42.921002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.078 qpair failed and we were unable to recover it. 00:27:17.078 [2024-07-16 01:32:42.921107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.078 [2024-07-16 01:32:42.921119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.078 qpair failed and we were unable to recover it. 00:27:17.078 [2024-07-16 01:32:42.921225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.078 [2024-07-16 01:32:42.921236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.078 qpair failed and we were unable to recover it. 00:27:17.078 [2024-07-16 01:32:42.921297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.078 [2024-07-16 01:32:42.921307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.078 qpair failed and we were unable to recover it. 00:27:17.078 [2024-07-16 01:32:42.921369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.078 [2024-07-16 01:32:42.921380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.078 qpair failed and we were unable to recover it. 00:27:17.078 [2024-07-16 01:32:42.921456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.078 [2024-07-16 01:32:42.921466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.078 qpair failed and we were unable to recover it. 00:27:17.078 [2024-07-16 01:32:42.921616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.078 [2024-07-16 01:32:42.921627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.078 qpair failed and we were unable to recover it. 
00:27:17.078 [2024-07-16 01:32:42.921773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.078 [2024-07-16 01:32:42.921784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.078 qpair failed and we were unable to recover it. 00:27:17.079 [2024-07-16 01:32:42.921876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.079 [2024-07-16 01:32:42.921887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.079 qpair failed and we were unable to recover it. 00:27:17.079 [2024-07-16 01:32:42.921953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.079 [2024-07-16 01:32:42.921963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.079 qpair failed and we were unable to recover it. 00:27:17.079 [2024-07-16 01:32:42.922050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.079 [2024-07-16 01:32:42.922061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.079 qpair failed and we were unable to recover it. 00:27:17.079 [2024-07-16 01:32:42.922120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.079 [2024-07-16 01:32:42.922129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.079 qpair failed and we were unable to recover it. 00:27:17.079 [2024-07-16 01:32:42.922315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.079 [2024-07-16 01:32:42.922335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:17.079 qpair failed and we were unable to recover it. 00:27:17.079 [2024-07-16 01:32:42.922490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.079 [2024-07-16 01:32:42.922506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:17.079 qpair failed and we were unable to recover it. 00:27:17.079 [2024-07-16 01:32:42.922597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.079 [2024-07-16 01:32:42.922613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420 00:27:17.079 qpair failed and we were unable to recover it. 00:27:17.079 [2024-07-16 01:32:42.922752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.079 [2024-07-16 01:32:42.922765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.079 qpair failed and we were unable to recover it. 00:27:17.079 [2024-07-16 01:32:42.922846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.079 [2024-07-16 01:32:42.922856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.079 qpair failed and we were unable to recover it. 
00:27:17.079 [2024-07-16 01:32:42.922923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.079 [2024-07-16 01:32:42.922935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.079 qpair failed and we were unable to recover it. 00:27:17.079 [2024-07-16 01:32:42.923084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.079 [2024-07-16 01:32:42.923095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.079 qpair failed and we were unable to recover it. 00:27:17.079 [2024-07-16 01:32:42.923232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.079 [2024-07-16 01:32:42.923244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.079 qpair failed and we were unable to recover it. 00:27:17.079 [2024-07-16 01:32:42.923444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.079 [2024-07-16 01:32:42.923455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.079 qpair failed and we were unable to recover it. 00:27:17.079 [2024-07-16 01:32:42.923538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.079 [2024-07-16 01:32:42.923549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.079 qpair failed and we were unable to recover it. 00:27:17.079 [2024-07-16 01:32:42.923682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.079 [2024-07-16 01:32:42.923694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.079 qpair failed and we were unable to recover it. 00:27:17.079 [2024-07-16 01:32:42.923823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.079 [2024-07-16 01:32:42.923834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.079 qpair failed and we were unable to recover it. 00:27:17.079 [2024-07-16 01:32:42.923893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.079 [2024-07-16 01:32:42.923903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.079 qpair failed and we were unable to recover it. 00:27:17.079 [2024-07-16 01:32:42.923982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.079 [2024-07-16 01:32:42.923997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.079 qpair failed and we were unable to recover it. 00:27:17.079 [2024-07-16 01:32:42.924129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.079 [2024-07-16 01:32:42.924140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.079 qpair failed and we were unable to recover it. 
00:27:17.079 [2024-07-16 01:32:42.924220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.079 [2024-07-16 01:32:42.924232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.079 qpair failed and we were unable to recover it. 00:27:17.079 [2024-07-16 01:32:42.924311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.079 [2024-07-16 01:32:42.924322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.079 qpair failed and we were unable to recover it. 00:27:17.079 [2024-07-16 01:32:42.924408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.079 [2024-07-16 01:32:42.924420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.079 qpair failed and we were unable to recover it. 00:27:17.079 [2024-07-16 01:32:42.924495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.079 [2024-07-16 01:32:42.924506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.079 qpair failed and we were unable to recover it. 00:27:17.079 [2024-07-16 01:32:42.924652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.079 [2024-07-16 01:32:42.924663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.079 qpair failed and we were unable to recover it. 00:27:17.079 [2024-07-16 01:32:42.924804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.079 [2024-07-16 01:32:42.924815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.079 qpair failed and we were unable to recover it. 00:27:17.079 [2024-07-16 01:32:42.925013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.079 [2024-07-16 01:32:42.925024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.079 qpair failed and we were unable to recover it. 00:27:17.079 [2024-07-16 01:32:42.925088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.079 [2024-07-16 01:32:42.925098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.079 qpair failed and we were unable to recover it. 00:27:17.079 [2024-07-16 01:32:42.925166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.079 [2024-07-16 01:32:42.925177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.079 qpair failed and we were unable to recover it. 00:27:17.079 [2024-07-16 01:32:42.925244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.079 [2024-07-16 01:32:42.925256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.079 qpair failed and we were unable to recover it. 
00:27:17.079 [2024-07-16 01:32:42.925400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.079 [2024-07-16 01:32:42.925411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.079 qpair failed and we were unable to recover it. 00:27:17.079 [2024-07-16 01:32:42.925486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.079 [2024-07-16 01:32:42.925498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.079 qpair failed and we were unable to recover it. 00:27:17.079 [2024-07-16 01:32:42.925573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.079 [2024-07-16 01:32:42.925584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.079 qpair failed and we were unable to recover it. 00:27:17.079 [2024-07-16 01:32:42.925806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.079 [2024-07-16 01:32:42.925817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.079 qpair failed and we were unable to recover it. 00:27:17.079 [2024-07-16 01:32:42.925907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.079 [2024-07-16 01:32:42.925917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.079 qpair failed and we were unable to recover it. 00:27:17.079 [2024-07-16 01:32:42.925996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.079 [2024-07-16 01:32:42.926008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.079 qpair failed and we were unable to recover it. 00:27:17.079 [2024-07-16 01:32:42.926085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.079 [2024-07-16 01:32:42.926095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.079 qpair failed and we were unable to recover it. 00:27:17.079 [2024-07-16 01:32:42.926171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.079 [2024-07-16 01:32:42.926182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.079 qpair failed and we were unable to recover it. 00:27:17.079 [2024-07-16 01:32:42.926258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.079 [2024-07-16 01:32:42.926270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.080 qpair failed and we were unable to recover it. 00:27:17.080 [2024-07-16 01:32:42.926360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.080 [2024-07-16 01:32:42.926371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.080 qpair failed and we were unable to recover it. 
00:27:17.080 [2024-07-16 01:32:42.926437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.080 [2024-07-16 01:32:42.926447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.080 qpair failed and we were unable to recover it. 00:27:17.080 [2024-07-16 01:32:42.926543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.080 [2024-07-16 01:32:42.926554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.080 qpair failed and we were unable to recover it. 00:27:17.080 [2024-07-16 01:32:42.926685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.080 [2024-07-16 01:32:42.926696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.080 qpair failed and we were unable to recover it. 00:27:17.080 [2024-07-16 01:32:42.926782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.080 [2024-07-16 01:32:42.926793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.080 qpair failed and we were unable to recover it. 00:27:17.080 [2024-07-16 01:32:42.926938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.080 [2024-07-16 01:32:42.926948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.080 qpair failed and we were unable to recover it. 00:27:17.080 [2024-07-16 01:32:42.927023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.080 [2024-07-16 01:32:42.927035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.080 qpair failed and we were unable to recover it. 00:27:17.080 [2024-07-16 01:32:42.927092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.080 [2024-07-16 01:32:42.927102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.080 qpair failed and we were unable to recover it. 00:27:17.080 [2024-07-16 01:32:42.927166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.080 [2024-07-16 01:32:42.927175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.080 qpair failed and we were unable to recover it. 00:27:17.080 [2024-07-16 01:32:42.927327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.080 [2024-07-16 01:32:42.927341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.080 qpair failed and we were unable to recover it. 00:27:17.080 [2024-07-16 01:32:42.927540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.080 [2024-07-16 01:32:42.927551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.080 qpair failed and we were unable to recover it. 
00:27:17.080 [2024-07-16 01:32:42.927700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.080 [2024-07-16 01:32:42.927710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.080 qpair failed and we were unable to recover it. 00:27:17.080 [2024-07-16 01:32:42.927782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.080 [2024-07-16 01:32:42.927793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.080 qpair failed and we were unable to recover it. 00:27:17.080 [2024-07-16 01:32:42.927870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.080 [2024-07-16 01:32:42.927881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.080 qpair failed and we were unable to recover it. 00:27:17.080 [2024-07-16 01:32:42.927967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.080 [2024-07-16 01:32:42.927977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.080 qpair failed and we were unable to recover it. 00:27:17.080 [2024-07-16 01:32:42.928230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.080 [2024-07-16 01:32:42.928242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.080 qpair failed and we were unable to recover it. 00:27:17.080 [2024-07-16 01:32:42.928390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.080 [2024-07-16 01:32:42.928402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.080 qpair failed and we were unable to recover it. 00:27:17.080 [2024-07-16 01:32:42.928554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.080 [2024-07-16 01:32:42.928566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.080 qpair failed and we were unable to recover it. 00:27:17.080 [2024-07-16 01:32:42.928634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.080 [2024-07-16 01:32:42.928644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.080 qpair failed and we were unable to recover it. 00:27:17.080 [2024-07-16 01:32:42.928714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.080 [2024-07-16 01:32:42.928725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.080 qpair failed and we were unable to recover it. 00:27:17.080 [2024-07-16 01:32:42.928802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.080 [2024-07-16 01:32:42.928813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.080 qpair failed and we were unable to recover it. 
00:27:17.080 [2024-07-16 01:32:42.928887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.080 [2024-07-16 01:32:42.928897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.080 qpair failed and we were unable to recover it. 00:27:17.080 [2024-07-16 01:32:42.928969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.080 [2024-07-16 01:32:42.928979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.080 qpair failed and we were unable to recover it. 00:27:17.080 [2024-07-16 01:32:42.929041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.080 [2024-07-16 01:32:42.929051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.080 qpair failed and we were unable to recover it. 00:27:17.080 [2024-07-16 01:32:42.929183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.080 [2024-07-16 01:32:42.929195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.080 qpair failed and we were unable to recover it. 00:27:17.080 [2024-07-16 01:32:42.929327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.080 [2024-07-16 01:32:42.929343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.080 qpair failed and we were unable to recover it. 00:27:17.080 [2024-07-16 01:32:42.929476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.080 [2024-07-16 01:32:42.929488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.080 qpair failed and we were unable to recover it. 00:27:17.080 [2024-07-16 01:32:42.929558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.080 [2024-07-16 01:32:42.929569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.080 qpair failed and we were unable to recover it. 00:27:17.080 [2024-07-16 01:32:42.929641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.080 [2024-07-16 01:32:42.929652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.080 qpair failed and we were unable to recover it. 00:27:17.080 [2024-07-16 01:32:42.929779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.080 [2024-07-16 01:32:42.929790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.080 qpair failed and we were unable to recover it. 00:27:17.080 [2024-07-16 01:32:42.929867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.080 [2024-07-16 01:32:42.929879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.080 qpair failed and we were unable to recover it. 
00:27:17.080 [2024-07-16 01:32:42.930020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.080 [2024-07-16 01:32:42.930031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:17.080 qpair failed and we were unable to recover it.
[... the same three-line error repeats, with only the timestamps advancing, for every further connection attempt (roughly 200 in all); the errno, tqpair handle 0x7ff0bc000b90, address 10.0.0.2, and port 4420 are identical throughout ...]
00:27:17.086 [2024-07-16 01:32:42.965639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.086 [2024-07-16 01:32:42.965651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:17.086 qpair failed and we were unable to recover it.
00:27:17.086 [2024-07-16 01:32:42.965751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.086 [2024-07-16 01:32:42.965761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.086 qpair failed and we were unable to recover it. 00:27:17.086 [2024-07-16 01:32:42.965903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.086 [2024-07-16 01:32:42.965915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.086 qpair failed and we were unable to recover it. 00:27:17.086 [2024-07-16 01:32:42.966142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.086 [2024-07-16 01:32:42.966153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.086 qpair failed and we were unable to recover it. 00:27:17.086 [2024-07-16 01:32:42.966300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.086 [2024-07-16 01:32:42.966311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.086 qpair failed and we were unable to recover it. 00:27:17.086 [2024-07-16 01:32:42.966534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.086 [2024-07-16 01:32:42.966546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.087 qpair failed and we were unable to recover it. 00:27:17.087 [2024-07-16 01:32:42.966767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.087 [2024-07-16 01:32:42.966780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.087 qpair failed and we were unable to recover it. 00:27:17.087 [2024-07-16 01:32:42.966926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.087 [2024-07-16 01:32:42.966938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.087 qpair failed and we were unable to recover it. 00:27:17.087 [2024-07-16 01:32:42.967161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.087 [2024-07-16 01:32:42.967173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.087 qpair failed and we were unable to recover it. 00:27:17.087 [2024-07-16 01:32:42.967372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.087 [2024-07-16 01:32:42.967384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.087 qpair failed and we were unable to recover it. 00:27:17.087 [2024-07-16 01:32:42.967483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.087 [2024-07-16 01:32:42.967494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.087 qpair failed and we were unable to recover it. 
00:27:17.087 [2024-07-16 01:32:42.967584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.087 [2024-07-16 01:32:42.967595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.087 qpair failed and we were unable to recover it. 00:27:17.087 [2024-07-16 01:32:42.967817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.087 [2024-07-16 01:32:42.967828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.087 qpair failed and we were unable to recover it. 00:27:17.087 [2024-07-16 01:32:42.967901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.087 [2024-07-16 01:32:42.967910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.087 qpair failed and we were unable to recover it. 00:27:17.087 [2024-07-16 01:32:42.968046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.087 [2024-07-16 01:32:42.968058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.087 qpair failed and we were unable to recover it. 00:27:17.087 [2024-07-16 01:32:42.968259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.087 [2024-07-16 01:32:42.968271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.087 qpair failed and we were unable to recover it. 00:27:17.087 [2024-07-16 01:32:42.968498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.087 [2024-07-16 01:32:42.968510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.087 qpair failed and we were unable to recover it. 00:27:17.087 [2024-07-16 01:32:42.968653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.087 [2024-07-16 01:32:42.968664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.087 qpair failed and we were unable to recover it. 00:27:17.087 [2024-07-16 01:32:42.968808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.087 [2024-07-16 01:32:42.968820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.087 qpair failed and we were unable to recover it. 00:27:17.087 [2024-07-16 01:32:42.969016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.087 [2024-07-16 01:32:42.969028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.087 qpair failed and we were unable to recover it. 00:27:17.087 [2024-07-16 01:32:42.969207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.087 [2024-07-16 01:32:42.969218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.087 qpair failed and we were unable to recover it. 
00:27:17.087 [2024-07-16 01:32:42.969353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.087 [2024-07-16 01:32:42.969365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.087 qpair failed and we were unable to recover it. 00:27:17.087 [2024-07-16 01:32:42.969441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.087 [2024-07-16 01:32:42.969451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.087 qpair failed and we were unable to recover it. 00:27:17.087 [2024-07-16 01:32:42.969636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.087 [2024-07-16 01:32:42.969648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.087 qpair failed and we were unable to recover it. 00:27:17.087 [2024-07-16 01:32:42.969816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.087 [2024-07-16 01:32:42.969828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.087 qpair failed and we were unable to recover it. 00:27:17.087 [2024-07-16 01:32:42.970053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.087 [2024-07-16 01:32:42.970064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.087 qpair failed and we were unable to recover it. 00:27:17.087 [2024-07-16 01:32:42.970232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.087 [2024-07-16 01:32:42.970243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.087 qpair failed and we were unable to recover it. 00:27:17.087 [2024-07-16 01:32:42.970439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.087 [2024-07-16 01:32:42.970449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.087 qpair failed and we were unable to recover it. 00:27:17.087 [2024-07-16 01:32:42.970652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.087 [2024-07-16 01:32:42.970664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.087 qpair failed and we were unable to recover it. 00:27:17.087 [2024-07-16 01:32:42.970873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.087 [2024-07-16 01:32:42.970884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.087 qpair failed and we were unable to recover it. 00:27:17.087 [2024-07-16 01:32:42.970969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.087 [2024-07-16 01:32:42.970978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.087 qpair failed and we were unable to recover it. 
00:27:17.087 [2024-07-16 01:32:42.971126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.087 [2024-07-16 01:32:42.971137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.087 qpair failed and we were unable to recover it. 00:27:17.087 [2024-07-16 01:32:42.971284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.087 [2024-07-16 01:32:42.971295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.087 qpair failed and we were unable to recover it. 00:27:17.087 [2024-07-16 01:32:42.971436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.087 [2024-07-16 01:32:42.971449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.087 qpair failed and we were unable to recover it. 00:27:17.087 [2024-07-16 01:32:42.971621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.087 [2024-07-16 01:32:42.971632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.087 qpair failed and we were unable to recover it. 00:27:17.087 [2024-07-16 01:32:42.971805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.087 [2024-07-16 01:32:42.971816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.087 qpair failed and we were unable to recover it. 00:27:17.087 [2024-07-16 01:32:42.971963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.088 [2024-07-16 01:32:42.971974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.088 qpair failed and we were unable to recover it. 00:27:17.088 [2024-07-16 01:32:42.972113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.088 [2024-07-16 01:32:42.972124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.088 qpair failed and we were unable to recover it. 00:27:17.088 [2024-07-16 01:32:42.972367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.088 [2024-07-16 01:32:42.972378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.088 qpair failed and we were unable to recover it. 00:27:17.088 [2024-07-16 01:32:42.972473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.088 [2024-07-16 01:32:42.972484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.088 qpair failed and we were unable to recover it. 00:27:17.088 [2024-07-16 01:32:42.972568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.088 [2024-07-16 01:32:42.972578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.088 qpair failed and we were unable to recover it. 
00:27:17.088 [2024-07-16 01:32:42.972742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.088 [2024-07-16 01:32:42.972753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.088 qpair failed and we were unable to recover it. 00:27:17.088 [2024-07-16 01:32:42.972990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.088 [2024-07-16 01:32:42.973001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.088 qpair failed and we were unable to recover it. 00:27:17.088 [2024-07-16 01:32:42.973152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.088 [2024-07-16 01:32:42.973163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.088 qpair failed and we were unable to recover it. 00:27:17.088 [2024-07-16 01:32:42.973314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.088 [2024-07-16 01:32:42.973324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.088 qpair failed and we were unable to recover it. 00:27:17.088 [2024-07-16 01:32:42.973500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.088 [2024-07-16 01:32:42.973513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.088 qpair failed and we were unable to recover it. 00:27:17.088 [2024-07-16 01:32:42.973643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.088 [2024-07-16 01:32:42.973656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.088 qpair failed and we were unable to recover it. 00:27:17.088 [2024-07-16 01:32:42.973757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.088 [2024-07-16 01:32:42.973767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.088 qpair failed and we were unable to recover it. 00:27:17.088 [2024-07-16 01:32:42.973916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.088 [2024-07-16 01:32:42.973928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.088 qpair failed and we were unable to recover it. 00:27:17.088 [2024-07-16 01:32:42.974066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.088 [2024-07-16 01:32:42.974077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.088 qpair failed and we were unable to recover it. 00:27:17.088 [2024-07-16 01:32:42.974235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.088 [2024-07-16 01:32:42.974247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.088 qpair failed and we were unable to recover it. 
00:27:17.088 [2024-07-16 01:32:42.974323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.088 [2024-07-16 01:32:42.974333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.088 qpair failed and we were unable to recover it. 00:27:17.088 [2024-07-16 01:32:42.974536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.088 [2024-07-16 01:32:42.974548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.088 qpair failed and we were unable to recover it. 00:27:17.088 [2024-07-16 01:32:42.974632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.088 [2024-07-16 01:32:42.974642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.088 qpair failed and we were unable to recover it. 00:27:17.088 [2024-07-16 01:32:42.974727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.088 [2024-07-16 01:32:42.974737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.088 qpair failed and we were unable to recover it. 00:27:17.088 [2024-07-16 01:32:42.974896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.088 [2024-07-16 01:32:42.974908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.088 qpair failed and we were unable to recover it. 00:27:17.088 [2024-07-16 01:32:42.975134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.088 [2024-07-16 01:32:42.975145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.088 qpair failed and we were unable to recover it. 00:27:17.088 [2024-07-16 01:32:42.975322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.088 [2024-07-16 01:32:42.975333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.088 qpair failed and we were unable to recover it. 00:27:17.088 [2024-07-16 01:32:42.975421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.088 [2024-07-16 01:32:42.975433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.088 qpair failed and we were unable to recover it. 00:27:17.088 [2024-07-16 01:32:42.975605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.088 [2024-07-16 01:32:42.975616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.088 qpair failed and we were unable to recover it. 00:27:17.088 [2024-07-16 01:32:42.975782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.088 [2024-07-16 01:32:42.975794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.088 qpair failed and we were unable to recover it. 
00:27:17.088 [2024-07-16 01:32:42.976029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.088 [2024-07-16 01:32:42.976040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.088 qpair failed and we were unable to recover it. 00:27:17.088 [2024-07-16 01:32:42.976255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.088 [2024-07-16 01:32:42.976266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.088 qpair failed and we were unable to recover it. 00:27:17.088 [2024-07-16 01:32:42.976485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.088 [2024-07-16 01:32:42.976497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.088 qpair failed and we were unable to recover it. 00:27:17.088 [2024-07-16 01:32:42.976659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.088 [2024-07-16 01:32:42.976669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.088 qpair failed and we were unable to recover it. 00:27:17.088 [2024-07-16 01:32:42.976818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.088 [2024-07-16 01:32:42.976829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.088 qpair failed and we were unable to recover it. 00:27:17.088 [2024-07-16 01:32:42.977070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.088 [2024-07-16 01:32:42.977081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.088 qpair failed and we were unable to recover it. 00:27:17.088 [2024-07-16 01:32:42.977313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.088 [2024-07-16 01:32:42.977325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.088 qpair failed and we were unable to recover it. 00:27:17.088 [2024-07-16 01:32:42.977563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.088 [2024-07-16 01:32:42.977576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.088 qpair failed and we were unable to recover it. 00:27:17.088 [2024-07-16 01:32:42.977726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.088 [2024-07-16 01:32:42.977737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.088 qpair failed and we were unable to recover it. 00:27:17.088 [2024-07-16 01:32:42.977910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.088 [2024-07-16 01:32:42.977921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.088 qpair failed and we were unable to recover it. 
00:27:17.088 [2024-07-16 01:32:42.977999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.088 [2024-07-16 01:32:42.978011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.088 qpair failed and we were unable to recover it. 00:27:17.088 [2024-07-16 01:32:42.978234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.088 [2024-07-16 01:32:42.978245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.088 qpair failed and we were unable to recover it. 00:27:17.088 [2024-07-16 01:32:42.978458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.088 [2024-07-16 01:32:42.978469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.088 qpair failed and we were unable to recover it. 00:27:17.088 [2024-07-16 01:32:42.978733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.089 [2024-07-16 01:32:42.978744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.089 qpair failed and we were unable to recover it. 00:27:17.089 [2024-07-16 01:32:42.978910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.089 [2024-07-16 01:32:42.978922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.089 qpair failed and we were unable to recover it. 00:27:17.089 [2024-07-16 01:32:42.979164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.089 [2024-07-16 01:32:42.979175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.089 qpair failed and we were unable to recover it. 00:27:17.089 [2024-07-16 01:32:42.979251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.089 [2024-07-16 01:32:42.979261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.089 qpair failed and we were unable to recover it. 00:27:17.089 [2024-07-16 01:32:42.979458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.089 [2024-07-16 01:32:42.979470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.089 qpair failed and we were unable to recover it. 00:27:17.089 [2024-07-16 01:32:42.979667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.089 [2024-07-16 01:32:42.979679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.089 qpair failed and we were unable to recover it. 00:27:17.089 [2024-07-16 01:32:42.979809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.089 [2024-07-16 01:32:42.979821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.089 qpair failed and we were unable to recover it. 
00:27:17.089 [2024-07-16 01:32:42.979960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.089 [2024-07-16 01:32:42.979971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.089 qpair failed and we were unable to recover it. 00:27:17.089 [2024-07-16 01:32:42.980198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.089 [2024-07-16 01:32:42.980211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.089 qpair failed and we were unable to recover it. 00:27:17.089 [2024-07-16 01:32:42.980433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.089 [2024-07-16 01:32:42.980445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.089 qpair failed and we were unable to recover it. 00:27:17.089 [2024-07-16 01:32:42.980691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.089 [2024-07-16 01:32:42.980702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.089 qpair failed and we were unable to recover it. 00:27:17.089 [2024-07-16 01:32:42.980857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.089 [2024-07-16 01:32:42.980869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.089 qpair failed and we were unable to recover it. 00:27:17.089 [2024-07-16 01:32:42.981088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.089 [2024-07-16 01:32:42.981102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.089 qpair failed and we were unable to recover it. 00:27:17.089 [2024-07-16 01:32:42.981312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.089 [2024-07-16 01:32:42.981324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.089 qpair failed and we were unable to recover it. 00:27:17.089 [2024-07-16 01:32:42.981493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.089 [2024-07-16 01:32:42.981505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.089 qpair failed and we were unable to recover it. 00:27:17.089 [2024-07-16 01:32:42.981663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.089 [2024-07-16 01:32:42.981675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.089 qpair failed and we were unable to recover it. 00:27:17.089 [2024-07-16 01:32:42.981851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.089 [2024-07-16 01:32:42.981862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.089 qpair failed and we were unable to recover it. 
00:27:17.089 [2024-07-16 01:32:42.982032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.089 [2024-07-16 01:32:42.982044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.089 qpair failed and we were unable to recover it. 00:27:17.089 [2024-07-16 01:32:42.982280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.089 [2024-07-16 01:32:42.982292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.089 qpair failed and we were unable to recover it. 00:27:17.089 [2024-07-16 01:32:42.982443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.089 [2024-07-16 01:32:42.982455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.089 qpair failed and we were unable to recover it. 00:27:17.089 [2024-07-16 01:32:42.982551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.089 [2024-07-16 01:32:42.982561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.089 qpair failed and we were unable to recover it. 00:27:17.363 [2024-07-16 01:32:42.982710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.363 [2024-07-16 01:32:42.982722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.363 qpair failed and we were unable to recover it. 00:27:17.363 [2024-07-16 01:32:42.982897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.363 [2024-07-16 01:32:42.982908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.363 qpair failed and we were unable to recover it. 00:27:17.363 [2024-07-16 01:32:42.983162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.363 [2024-07-16 01:32:42.983174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.363 qpair failed and we were unable to recover it. 00:27:17.363 [2024-07-16 01:32:42.983354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.363 [2024-07-16 01:32:42.983365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.363 qpair failed and we were unable to recover it. 00:27:17.363 [2024-07-16 01:32:42.983497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.363 [2024-07-16 01:32:42.983508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.363 qpair failed and we were unable to recover it. 00:27:17.363 [2024-07-16 01:32:42.983705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.363 [2024-07-16 01:32:42.983717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.363 qpair failed and we were unable to recover it. 
00:27:17.363 [2024-07-16 01:32:42.983859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.363 [2024-07-16 01:32:42.983871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.363 qpair failed and we were unable to recover it. 00:27:17.363 [2024-07-16 01:32:42.984023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.363 [2024-07-16 01:32:42.984034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.363 qpair failed and we were unable to recover it. 00:27:17.363 [2024-07-16 01:32:42.984269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.363 [2024-07-16 01:32:42.984281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.363 qpair failed and we were unable to recover it. 00:27:17.363 [2024-07-16 01:32:42.984491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.363 [2024-07-16 01:32:42.984503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.363 qpair failed and we were unable to recover it. 00:27:17.363 [2024-07-16 01:32:42.984742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.363 [2024-07-16 01:32:42.984753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.363 qpair failed and we were unable to recover it. 00:27:17.363 [2024-07-16 01:32:42.984999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.363 [2024-07-16 01:32:42.985010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.363 qpair failed and we were unable to recover it. 00:27:17.363 [2024-07-16 01:32:42.985150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.363 [2024-07-16 01:32:42.985161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.363 qpair failed and we were unable to recover it. 00:27:17.363 [2024-07-16 01:32:42.985371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.363 [2024-07-16 01:32:42.985383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.363 qpair failed and we were unable to recover it. 00:27:17.363 [2024-07-16 01:32:42.985529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.363 [2024-07-16 01:32:42.985540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.363 qpair failed and we were unable to recover it. 00:27:17.363 [2024-07-16 01:32:42.985768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.363 [2024-07-16 01:32:42.985779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.363 qpair failed and we were unable to recover it. 
00:27:17.363 [2024-07-16 01:32:42.985861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.363 [2024-07-16 01:32:42.985871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.363 qpair failed and we were unable to recover it. 00:27:17.363 [2024-07-16 01:32:42.986019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.363 [2024-07-16 01:32:42.986030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.363 qpair failed and we were unable to recover it. 00:27:17.363 [2024-07-16 01:32:42.986180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.363 [2024-07-16 01:32:42.986192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.363 qpair failed and we were unable to recover it. 00:27:17.363 [2024-07-16 01:32:42.986400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.363 [2024-07-16 01:32:42.986411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.363 qpair failed and we were unable to recover it. 00:27:17.363 [2024-07-16 01:32:42.986508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.363 [2024-07-16 01:32:42.986519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.363 qpair failed and we were unable to recover it. 00:27:17.363 [2024-07-16 01:32:42.986761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.363 [2024-07-16 01:32:42.986772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.363 qpair failed and we were unable to recover it. 00:27:17.363 [2024-07-16 01:32:42.986969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.363 [2024-07-16 01:32:42.986981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.363 qpair failed and we were unable to recover it. 00:27:17.363 [2024-07-16 01:32:42.987231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.363 [2024-07-16 01:32:42.987243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.363 qpair failed and we were unable to recover it. 00:27:17.363 [2024-07-16 01:32:42.987396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.363 [2024-07-16 01:32:42.987408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.363 qpair failed and we were unable to recover it. 00:27:17.363 [2024-07-16 01:32:42.987629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.363 [2024-07-16 01:32:42.987640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.363 qpair failed and we were unable to recover it. 
00:27:17.363 [2024-07-16 01:32:42.987728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.363 [2024-07-16 01:32:42.987738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.363 qpair failed and we were unable to recover it. 00:27:17.363 [2024-07-16 01:32:42.987892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.363 [2024-07-16 01:32:42.987904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.363 qpair failed and we were unable to recover it. 00:27:17.363 [2024-07-16 01:32:42.988109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.363 [2024-07-16 01:32:42.988120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.363 qpair failed and we were unable to recover it. 00:27:17.363 [2024-07-16 01:32:42.988344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.363 [2024-07-16 01:32:42.988356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.363 qpair failed and we were unable to recover it. 00:27:17.363 [2024-07-16 01:32:42.988633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.363 [2024-07-16 01:32:42.988644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.363 qpair failed and we were unable to recover it. 00:27:17.363 [2024-07-16 01:32:42.988832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.363 [2024-07-16 01:32:42.988845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.363 qpair failed and we were unable to recover it. 00:27:17.363 [2024-07-16 01:32:42.988995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.363 [2024-07-16 01:32:42.989007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.363 qpair failed and we were unable to recover it. 00:27:17.363 [2024-07-16 01:32:42.989230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.363 [2024-07-16 01:32:42.989242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.363 qpair failed and we were unable to recover it. 00:27:17.363 [2024-07-16 01:32:42.989326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.363 [2024-07-16 01:32:42.989340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.363 qpair failed and we were unable to recover it. 00:27:17.363 [2024-07-16 01:32:42.989507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.363 [2024-07-16 01:32:42.989519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.363 qpair failed and we were unable to recover it. 
00:27:17.363 [2024-07-16 01:32:42.989653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.363 [2024-07-16 01:32:42.989664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.363 qpair failed and we were unable to recover it.
[... previous message pair repeated ~200 times between 01:32:42.989 and 01:32:43.028: every connect() attempt to 10.0.0.2:4420 failed with errno = 111 (ECONNREFUSED) for tqpair=0x7ff0bc000b90, each followed by "qpair failed and we were unable to recover it." ...]
00:27:17.366 [2024-07-16 01:32:43.028946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-07-16 01:32:43.028958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it.
00:27:17.366 [2024-07-16 01:32:43.029179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-07-16 01:32:43.029191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-07-16 01:32:43.029325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-07-16 01:32:43.029340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-07-16 01:32:43.029488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-07-16 01:32:43.029500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-07-16 01:32:43.029698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-07-16 01:32:43.029710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-07-16 01:32:43.029927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-07-16 01:32:43.029939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-07-16 01:32:43.030163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-07-16 01:32:43.030174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-07-16 01:32:43.030395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-07-16 01:32:43.030407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-07-16 01:32:43.030626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-07-16 01:32:43.030637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-07-16 01:32:43.030792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-07-16 01:32:43.030802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-07-16 01:32:43.030970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-07-16 01:32:43.030981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 
00:27:17.366 [2024-07-16 01:32:43.031198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-07-16 01:32:43.031210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-07-16 01:32:43.031380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-07-16 01:32:43.031391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-07-16 01:32:43.031537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-07-16 01:32:43.031551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-07-16 01:32:43.031720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-07-16 01:32:43.031732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-07-16 01:32:43.031867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-07-16 01:32:43.031878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-07-16 01:32:43.032011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-07-16 01:32:43.032023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-07-16 01:32:43.032193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-07-16 01:32:43.032204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-07-16 01:32:43.032346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-07-16 01:32:43.032358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-07-16 01:32:43.032561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-07-16 01:32:43.032573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-07-16 01:32:43.032743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-07-16 01:32:43.032754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 
00:27:17.366 [2024-07-16 01:32:43.032828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-07-16 01:32:43.032838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-07-16 01:32:43.033000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-07-16 01:32:43.033012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-07-16 01:32:43.033209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-07-16 01:32:43.033220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-07-16 01:32:43.033300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-07-16 01:32:43.033311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-07-16 01:32:43.033453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-07-16 01:32:43.033466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-07-16 01:32:43.033598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-07-16 01:32:43.033609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.033696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.033706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.033923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.033935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.034076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.034087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.034328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.034342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 
00:27:17.367 [2024-07-16 01:32:43.034440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.034450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.034684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.034696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.034934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.034945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.035114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.035125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.035320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.035331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.035562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.035574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.035794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.035805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.036005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.036016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.036100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.036110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.036195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.036206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 
00:27:17.367 [2024-07-16 01:32:43.036420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.036432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.036596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.036607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.036824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.036835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.036998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.037010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.037235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.037247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.037434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.037447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.037587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.037598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.037688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.037698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.037894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.037905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.038124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.038136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 
00:27:17.367 [2024-07-16 01:32:43.038231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.038241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.038464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.038475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.038627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.038640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.038882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.038894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.039036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.039049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.039187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.039198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.039395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.039407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.039604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.039615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.039745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.039756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.040011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.040022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 
00:27:17.367 [2024-07-16 01:32:43.040231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.040242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.040392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.040405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.040642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.040653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.040861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.040872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.041018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.041029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.041183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.041195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.041362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.041375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.041581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.041592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.041734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.041746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.041991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.042003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 
00:27:17.367 [2024-07-16 01:32:43.042201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.042212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.042412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.042423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.042507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.042517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.042613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.042624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.042779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.042790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.042917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.042929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.043132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.043144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.043232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.043242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.043322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.043332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.043490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.043502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 
00:27:17.367 [2024-07-16 01:32:43.043666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.043677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.043824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.043836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.044033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.044046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.044262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.044273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.044495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.044507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.044649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.044660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.044809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.044820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.044903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.044914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.045087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.045098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.045270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.045282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 
00:27:17.367 [2024-07-16 01:32:43.045428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.045440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.045594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.045604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.045745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.045759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.045990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.046002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.046223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.046234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.046518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.367 [2024-07-16 01:32:43.046530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.367 qpair failed and we were unable to recover it. 00:27:17.367 [2024-07-16 01:32:43.046725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-07-16 01:32:43.046736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-07-16 01:32:43.046931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-07-16 01:32:43.046942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-07-16 01:32:43.047080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-07-16 01:32:43.047091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-07-16 01:32:43.047289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-07-16 01:32:43.047301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 
00:27:17.368 [2024-07-16 01:32:43.047463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-07-16 01:32:43.047475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-07-16 01:32:43.047560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-07-16 01:32:43.047570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-07-16 01:32:43.047727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-07-16 01:32:43.047739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-07-16 01:32:43.047943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-07-16 01:32:43.047955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-07-16 01:32:43.048049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-07-16 01:32:43.048060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-07-16 01:32:43.048133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-07-16 01:32:43.048143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-07-16 01:32:43.048275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-07-16 01:32:43.048287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-07-16 01:32:43.048362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-07-16 01:32:43.048372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-07-16 01:32:43.048514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-07-16 01:32:43.048526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-07-16 01:32:43.048676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-07-16 01:32:43.048687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 
00:27:17.368 [2024-07-16 01:32:43.048830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-07-16 01:32:43.048842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-07-16 01:32:43.048995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-07-16 01:32:43.049005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-07-16 01:32:43.049147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-07-16 01:32:43.049159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-07-16 01:32:43.049410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-07-16 01:32:43.049423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-07-16 01:32:43.049621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-07-16 01:32:43.049632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-07-16 01:32:43.049859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-07-16 01:32:43.049871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-07-16 01:32:43.050009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-07-16 01:32:43.050020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-07-16 01:32:43.050215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-07-16 01:32:43.050226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-07-16 01:32:43.050393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-07-16 01:32:43.050404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-07-16 01:32:43.050602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-07-16 01:32:43.050613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 
00:27:17.368 [2024-07-16 01:32:43.050842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-07-16 01:32:43.050854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-07-16 01:32:43.050994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-07-16 01:32:43.051005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-07-16 01:32:43.051169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-07-16 01:32:43.051181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-07-16 01:32:43.051388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-07-16 01:32:43.051400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-07-16 01:32:43.051547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-07-16 01:32:43.051558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-07-16 01:32:43.051689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-07-16 01:32:43.051700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-07-16 01:32:43.051863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-07-16 01:32:43.051874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-07-16 01:32:43.052017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-07-16 01:32:43.052028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-07-16 01:32:43.052226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-07-16 01:32:43.052238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 00:27:17.368 [2024-07-16 01:32:43.052341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.368 [2024-07-16 01:32:43.052352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.368 qpair failed and we were unable to recover it. 
00:27:17.368 [2024-07-16 01:32:43.052433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.368 [2024-07-16 01:32:43.052443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:17.368 qpair failed and we were unable to recover it.
[... the three-line connect()/qpair failure above repeats continuously from 01:32:43.052433 through 01:32:43.087305; duplicate log entries collapsed. Every retry targets tqpair=0x7ff0bc000b90 at 10.0.0.2, port 4420 and fails with errno = 111 ...]
00:27:17.371 [2024-07-16 01:32:43.087452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.087464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.087598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.087609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.087711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.087722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.087853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.087864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.087943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.087953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.088152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.088163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.088319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.088331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.088510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.088521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.088663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.088675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.088758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.088768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 
00:27:17.371 [2024-07-16 01:32:43.089005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.089016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.089241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.089252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.089397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.089409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.089565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.089576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.089710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.089721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.089816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.089826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.090064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.090076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.090244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.090256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.090400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.090413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.090507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.090519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 
00:27:17.371 [2024-07-16 01:32:43.090601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.090611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.090835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.090846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.091059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.091070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.091278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.091289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.091440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.091452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.091608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.091620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.091718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.091728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.091954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.091965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.092187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.092198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.092394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.092406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 
00:27:17.371 [2024-07-16 01:32:43.092539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.092550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.092748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.092759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.092973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.092986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.093073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.093083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.093228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.093240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.093405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.093417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.093628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.093639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.093847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.093858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.094075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.094087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.094268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.094280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 
00:27:17.371 [2024-07-16 01:32:43.094484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.094496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.094703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.094715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.094811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.094821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.095006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.095017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.095192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.095203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.095381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.095393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.095546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.095558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.095649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.095659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.095808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.095819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.095991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.096003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 
00:27:17.371 [2024-07-16 01:32:43.096066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.096076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.096219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.096230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.096379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.096390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.096590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.096601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.096812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.096824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.096973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.096984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.097131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.097143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.097285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.097296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-07-16 01:32:43.097464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.097475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 A controller has encountered a failure and is being reset. 00:27:17.371 [2024-07-16 01:32:43.097694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-07-16 01:32:43.097724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2cfc0 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 
00:27:17.371 [2024-07-16 01:32:43.097949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.372 [2024-07-16 01:32:43.097977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0b4000b90 with addr=10.0.0.2, port=4420
00:27:17.372 qpair failed and we were unable to recover it.
[... identical connect() failed, errno = 111 / sock connection error / qpair failed records for tqpair=0x7ff0b4000b90, addr=10.0.0.2, port=4420 repeat through 01:32:43.100634 ...]
00:27:17.372 [2024-07-16 01:32:43.100723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.372 [2024-07-16 01:32:43.100736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420
00:27:17.372 qpair failed and we were unable to recover it.
[... identical connect() failed, errno = 111 / sock connection error / qpair failed records for tqpair=0x7ff0bc000b90, addr=10.0.0.2, port=4420 repeat through 01:32:43.112582 ...]
00:27:17.373 [2024-07-16 01:32:43.112662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.373 [2024-07-16 01:32:43.112672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.373 qpair failed and we were unable to recover it. 00:27:17.373 [2024-07-16 01:32:43.112868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.373 [2024-07-16 01:32:43.112879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.373 qpair failed and we were unable to recover it. 00:27:17.373 [2024-07-16 01:32:43.112957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.373 [2024-07-16 01:32:43.112968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.373 qpair failed and we were unable to recover it. 00:27:17.373 [2024-07-16 01:32:43.113097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.373 [2024-07-16 01:32:43.113107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.373 qpair failed and we were unable to recover it. 00:27:17.373 [2024-07-16 01:32:43.113193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.373 [2024-07-16 01:32:43.113204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.373 qpair failed and we were unable to recover it. 00:27:17.373 [2024-07-16 01:32:43.113390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.373 [2024-07-16 01:32:43.113402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.373 qpair failed and we were unable to recover it. 00:27:17.373 [2024-07-16 01:32:43.113537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.373 [2024-07-16 01:32:43.113549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.373 qpair failed and we were unable to recover it. 00:27:17.373 [2024-07-16 01:32:43.113619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.373 [2024-07-16 01:32:43.113629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.373 qpair failed and we were unable to recover it. 00:27:17.373 [2024-07-16 01:32:43.113700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.373 [2024-07-16 01:32:43.113710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.373 qpair failed and we were unable to recover it. 00:27:17.373 [2024-07-16 01:32:43.113800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.373 [2024-07-16 01:32:43.113810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.373 qpair failed and we were unable to recover it. 
00:27:17.373 [2024-07-16 01:32:43.113875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.373 [2024-07-16 01:32:43.113886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.373 qpair failed and we were unable to recover it. 00:27:17.373 [2024-07-16 01:32:43.113947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.373 [2024-07-16 01:32:43.113957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.373 qpair failed and we were unable to recover it. 00:27:17.373 [2024-07-16 01:32:43.114084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.373 [2024-07-16 01:32:43.114095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.373 qpair failed and we were unable to recover it. 00:27:17.373 [2024-07-16 01:32:43.114291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.373 [2024-07-16 01:32:43.114303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.373 qpair failed and we were unable to recover it. 00:27:17.373 [2024-07-16 01:32:43.114501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.373 [2024-07-16 01:32:43.114513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.373 qpair failed and we were unable to recover it. 00:27:17.373 [2024-07-16 01:32:43.114673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.373 [2024-07-16 01:32:43.114684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.373 qpair failed and we were unable to recover it. 00:27:17.373 [2024-07-16 01:32:43.114772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.373 [2024-07-16 01:32:43.114783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.373 qpair failed and we were unable to recover it. 00:27:17.373 [2024-07-16 01:32:43.114862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.373 [2024-07-16 01:32:43.114872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.373 qpair failed and we were unable to recover it. 00:27:17.373 [2024-07-16 01:32:43.114964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.373 [2024-07-16 01:32:43.114975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.373 qpair failed and we were unable to recover it. 00:27:17.373 [2024-07-16 01:32:43.115063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.373 [2024-07-16 01:32:43.115073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.373 qpair failed and we were unable to recover it. 
00:27:17.373 [2024-07-16 01:32:43.115223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.373 [2024-07-16 01:32:43.115235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.373 qpair failed and we were unable to recover it. 00:27:17.373 [2024-07-16 01:32:43.115452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.373 [2024-07-16 01:32:43.115465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.373 qpair failed and we were unable to recover it. 00:27:17.373 [2024-07-16 01:32:43.115545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.373 [2024-07-16 01:32:43.115558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.373 qpair failed and we were unable to recover it. 00:27:17.373 [2024-07-16 01:32:43.115689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.373 [2024-07-16 01:32:43.115700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.373 qpair failed and we were unable to recover it. 00:27:17.373 [2024-07-16 01:32:43.115864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.373 [2024-07-16 01:32:43.115875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.373 qpair failed and we were unable to recover it. 00:27:17.373 [2024-07-16 01:32:43.115951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.373 [2024-07-16 01:32:43.115960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.373 qpair failed and we were unable to recover it. 00:27:17.373 [2024-07-16 01:32:43.116109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.373 [2024-07-16 01:32:43.116121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.373 qpair failed and we were unable to recover it. 00:27:17.373 [2024-07-16 01:32:43.116264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.373 [2024-07-16 01:32:43.116275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.373 qpair failed and we were unable to recover it. 00:27:17.373 [2024-07-16 01:32:43.116375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.373 [2024-07-16 01:32:43.116385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.373 qpair failed and we were unable to recover it. 00:27:17.373 [2024-07-16 01:32:43.116457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.373 [2024-07-16 01:32:43.116466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.373 qpair failed and we were unable to recover it. 
00:27:17.373 [2024-07-16 01:32:43.116599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.373 [2024-07-16 01:32:43.116611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.373 qpair failed and we were unable to recover it. 00:27:17.373 [2024-07-16 01:32:43.116752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.373 [2024-07-16 01:32:43.116764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.373 qpair failed and we were unable to recover it. 00:27:17.373 [2024-07-16 01:32:43.116844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.373 [2024-07-16 01:32:43.116854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.373 qpair failed and we were unable to recover it. 00:27:17.373 [2024-07-16 01:32:43.117001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.373 [2024-07-16 01:32:43.117013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.373 qpair failed and we were unable to recover it. 00:27:17.373 [2024-07-16 01:32:43.117102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.373 [2024-07-16 01:32:43.117113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.373 qpair failed and we were unable to recover it. 00:27:17.373 [2024-07-16 01:32:43.117184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.373 [2024-07-16 01:32:43.117193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.373 qpair failed and we were unable to recover it. 00:27:17.373 [2024-07-16 01:32:43.117325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.373 [2024-07-16 01:32:43.117347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.373 qpair failed and we were unable to recover it. 00:27:17.373 [2024-07-16 01:32:43.117494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.373 [2024-07-16 01:32:43.117505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.373 qpair failed and we were unable to recover it. 00:27:17.373 [2024-07-16 01:32:43.117574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.373 [2024-07-16 01:32:43.117584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.373 qpair failed and we were unable to recover it. 00:27:17.373 [2024-07-16 01:32:43.117714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.373 [2024-07-16 01:32:43.117725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.373 qpair failed and we were unable to recover it. 
00:27:17.373 [2024-07-16 01:32:43.117864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.117876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.118013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.118024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.118150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.118161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.118224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.118234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.118379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.118390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.118529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.118540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.118704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.118715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.118900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.118912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.118987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.118998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.119163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.119174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 
00:27:17.374 [2024-07-16 01:32:43.119300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.119311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.119375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.119386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.119587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.119598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.119766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.119777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.119936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.119948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.120039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.120050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.120172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.120184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.120277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.120287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.120378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.120391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.120484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.120495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 
00:27:17.374 [2024-07-16 01:32:43.120663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.120674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.120873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.120885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.120950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.120963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.121042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.121053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.121196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.121207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.121414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.121425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.121522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.121533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.121596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.121606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.121742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.121753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.121970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.121981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 
00:27:17.374 [2024-07-16 01:32:43.122078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.122088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.122177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.122189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.122271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.122281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.122412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.122423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.122574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.122585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.122681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.122693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.122865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.122876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.123085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.123097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.123239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.123250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.123340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.123354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 
00:27:17.374 [2024-07-16 01:32:43.123439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.123449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.123605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.123616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.123700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.123710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.123870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.123880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.124023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.124035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.124120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.124130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.124226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.124238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.124439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.124450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.124591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.124602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.124750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.124761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 
00:27:17.374 [2024-07-16 01:32:43.124853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.124864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.124995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.125006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.125151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.125161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.125281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.125292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.125438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.125449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.125541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.125552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.125626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.125637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.125821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.125833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.125989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.125999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.126135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.126147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 
00:27:17.374 [2024-07-16 01:32:43.126282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.126293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.126386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.126396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.126479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.126493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.126579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.126590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.126672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.126682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.126810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.126820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.126894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.126905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.126978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.126988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.127065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.127076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.127146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.127156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 
00:27:17.374 [2024-07-16 01:32:43.127224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.127235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.374 [2024-07-16 01:32:43.127388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.374 [2024-07-16 01:32:43.127400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.374 qpair failed and we were unable to recover it. 00:27:17.375 [2024-07-16 01:32:43.127579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-07-16 01:32:43.127591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-07-16 01:32:43.127680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-07-16 01:32:43.127691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-07-16 01:32:43.127791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-07-16 01:32:43.127802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-07-16 01:32:43.127889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-07-16 01:32:43.127900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-07-16 01:32:43.128044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-07-16 01:32:43.128055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-07-16 01:32:43.128141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-07-16 01:32:43.128152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-07-16 01:32:43.128217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-07-16 01:32:43.128227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-07-16 01:32:43.128295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-07-16 01:32:43.128305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 
00:27:17.375 [2024-07-16 01:32:43.128394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-07-16 01:32:43.128405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-07-16 01:32:43.128601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-07-16 01:32:43.128612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-07-16 01:32:43.128682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-07-16 01:32:43.128692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-07-16 01:32:43.128824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-07-16 01:32:43.128835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-07-16 01:32:43.128884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-07-16 01:32:43.128894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-07-16 01:32:43.129010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-07-16 01:32:43.129021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-07-16 01:32:43.129168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-07-16 01:32:43.129179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-07-16 01:32:43.129379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-07-16 01:32:43.129390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 00:27:17.375 [2024-07-16 01:32:43.129478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-07-16 01:32:43.129490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff0bc000b90 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 
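errno = 111 here is ECONNREFUSED: for as long as nothing is accepting connections on 10.0.0.2:4420, every reconnect attempt from the initiator is refused, which is the condition nvmf_target_disconnect_tc2 is exercising. As a hedged illustration only (this helper is not part of the test scripts; its name and timeout are invented for the sketch), a harness could wait for the listener to come back like this:

# wait_for_listener: hypothetical helper; polls until a plain TCP connect
# to addr:port stops being refused (errno 111) or the timeout expires.
wait_for_listener() {
    local addr=$1 port=$2 timeout=${3:-30} i
    for ((i = 0; i < timeout; i++)); do
        # bash's /dev/tcp redirection performs an ordinary TCP connect();
        # the subshell closes the descriptor again as soon as it exits
        if (exec 3<> "/dev/tcp/$addr/$port") 2>/dev/null; then
            return 0
        fi
        sleep 1
    done
    return 1
}

# usage: wait_for_listener 10.0.0.2 4420 || echo 'listener never came back'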
00:27:17.375 [2024-07-16 01:32:43.129764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.375 [2024-07-16 01:32:43.129805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3b040 with addr=10.0.0.2, port=4420
[2024-07-16 01:32:43.129820] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3b040 is same with the state(5) to be set
[2024-07-16 01:32:43.129837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d3b040 (9): Bad file descriptor
[2024-07-16 01:32:43.129859] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
[2024-07-16 01:32:43.129869] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
[2024-07-16 01:32:43.129881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:17.375 Unable to reset the controller.
00:27:17.632 01:32:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:27:17.632 01:32:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:27:17.632 01:32:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:27:17.632 01:32:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:27:17.632 01:32:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:17.633 01:32:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:17.633 01:32:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:17.633 01:32:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:17.633 01:32:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:17.633 Malloc0
00:27:17.633 01:32:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:17.633 01:32:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:27:17.633 01:32:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:17.633 01:32:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:17.633 [2024-07-16 01:32:43.590362] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:17.633 01:32:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:17.633 01:32:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:17.633 01:32:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:17.633 01:32:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:17.633 01:32:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:17.633 01:32:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:17.633 01:32:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:17.633 01:32:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:17.633 01:32:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:17.633 01:32:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:17.633 01:32:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:17.633 01:32:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:17.633 [2024-07-16 01:32:43.619397] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:17.890 01:32:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:17.890 01:32:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:17.890 01:32:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:17.890 01:32:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:17.890 01:32:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:17.890 01:32:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3543933
00:27:18.147 Controller properly reset.
00:27:23.404 Initializing NVMe Controllers
00:27:23.404 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:23.404 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:23.404 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:27:23.404 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:27:23.404 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:27:23.404 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:27:23.404 Initialization complete. Launching workers.
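Taken together, the rpc_cmd calls traced above bring the target from an empty state to a listening NVMe/TCP subsystem. A standalone sketch of the same sequence, issued directly through SPDK's scripts/rpc.py (rpc_cmd in the harness forwards to it, or to an equivalent RPC client); the RPC path and socket address are assumptions for this sketch:

#!/usr/bin/env bash
# Hedged sketch of the target bring-up sequence logged above.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed path

$RPC bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM-backed bdev, 512 B blocks
$RPC nvmf_create_transport -t tcp -o                # TCP transport with default options
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420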
00:27:23.404 Starting thread on core 1 00:27:23.404 Starting thread on core 2 00:27:23.404 Starting thread on core 3 00:27:23.404 Starting thread on core 0 00:27:23.404 01:32:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:27:23.404 00:27:23.404 real 0m11.318s 00:27:23.404 user 0m36.780s 00:27:23.404 sys 0m5.785s 00:27:23.404 01:32:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:23.404 01:32:49 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:23.404 ************************************ 00:27:23.404 END TEST nvmf_target_disconnect_tc2 00:27:23.404 ************************************ 00:27:23.404 01:32:49 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:27:23.404 01:32:49 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:27:23.404 01:32:49 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:23.404 01:32:49 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:27:23.404 01:32:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:23.404 01:32:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:27:23.404 01:32:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:23.404 01:32:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:27:23.404 01:32:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:23.404 01:32:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:23.404 rmmod nvme_tcp 00:27:23.404 rmmod nvme_fabrics 00:27:23.404 rmmod nvme_keyring 00:27:23.404 01:32:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:23.404 01:32:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:27:23.404 01:32:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:27:23.404 01:32:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 3544627 ']' 00:27:23.404 01:32:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 3544627 00:27:23.404 01:32:49 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 3544627 ']' 00:27:23.404 01:32:49 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 3544627 00:27:23.404 01:32:49 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:27:23.404 01:32:49 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:23.404 01:32:49 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3544627 00:27:23.404 01:32:49 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:27:23.404 01:32:49 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:27:23.404 01:32:49 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3544627' 00:27:23.404 killing process with pid 3544627 00:27:23.404 01:32:49 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 3544627 00:27:23.404 01:32:49 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 3544627 00:27:23.663 
01:32:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:23.663 01:32:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:23.663 01:32:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:23.663 01:32:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:23.663 01:32:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:23.663 01:32:49 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:23.663 01:32:49 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:23.663 01:32:49 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:25.568 01:32:51 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:25.568 00:27:25.568 real 0m19.103s 00:27:25.568 user 1m3.771s 00:27:25.568 sys 0m10.139s 00:27:25.568 01:32:51 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:25.568 01:32:51 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:25.568 ************************************ 00:27:25.568 END TEST nvmf_target_disconnect 00:27:25.568 ************************************ 00:27:25.568 01:32:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:25.568 01:32:51 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:27:25.568 01:32:51 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:25.568 01:32:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:25.568 01:32:51 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:27:25.568 00:27:25.568 real 20m51.791s 00:27:25.568 user 45m8.391s 00:27:25.568 sys 6m22.158s 00:27:25.568 01:32:51 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:25.568 01:32:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:25.568 ************************************ 00:27:25.568 END TEST nvmf_tcp 00:27:25.568 ************************************ 00:27:25.827 01:32:51 -- common/autotest_common.sh@1142 -- # return 0 00:27:25.827 01:32:51 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:27:25.827 01:32:51 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:25.827 01:32:51 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:25.827 01:32:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:25.827 01:32:51 -- common/autotest_common.sh@10 -- # set +x 00:27:25.827 ************************************ 00:27:25.827 START TEST spdkcli_nvmf_tcp 00:27:25.827 ************************************ 00:27:25.827 01:32:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:25.827 * Looking for test storage... 
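The spdkcli test that follows drives the target through spdkcli_job.py, which replays triples of a command, a string expected in its output, and a match flag. The same objects can also be created one command at a time with SPDK's shell, scripts/spdkcli.py, which accepts a one-shot command on the command line (that is how check_match runs "ll /nvmf" further down). A sketch using commands taken verbatim from the job below; $SPDK_DIR is again an illustrative stand-in, and nvmf_tgt is assumed to be listening on the default RPC socket:

SPDKCLI="${SPDK_DIR:?}/scripts/spdkcli.py"

"$SPDKCLI" /bdevs/malloc create 32 512 Malloc1
"$SPDKCLI" nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
"$SPDKCLI" /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW \
    max_namespaces=4 allow_any_host=True
"$SPDKCLI" /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses \
    create tcp 127.0.0.1 4260 IPv4
"$SPDKCLI" ll /nvmf    # dump the tree; check_match diffs this against a .test.match file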
00:27:25.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:27:25.827 01:32:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:27:25.827 01:32:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:27:25.827 01:32:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:27:25.827 01:32:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:25.827 01:32:51 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:27:25.827 01:32:51 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:25.827 01:32:51 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:25.827 01:32:51 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:25.827 01:32:51 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:25.827 01:32:51 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:25.827 01:32:51 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:25.827 01:32:51 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:25.827 01:32:51 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:25.827 01:32:51 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:25.827 01:32:51 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:25.828 01:32:51 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:25.828 01:32:51 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:25.828 01:32:51 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:25.828 01:32:51 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:25.828 01:32:51 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:25.828 01:32:51 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:25.828 01:32:51 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:25.828 01:32:51 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:25.828 01:32:51 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:25.828 01:32:51 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:25.828 01:32:51 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.828 01:32:51 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.828 01:32:51 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.828 01:32:51 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:27:25.828 01:32:51 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.828 01:32:51 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:27:25.828 01:32:51 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:25.828 01:32:51 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:25.828 01:32:51 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:25.828 01:32:51 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:25.828 01:32:51 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:25.828 01:32:51 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:25.828 01:32:51 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:25.828 01:32:51 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:25.828 01:32:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:27:25.828 01:32:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:27:25.828 01:32:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:27:25.828 01:32:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:27:25.828 01:32:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:25.828 01:32:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:25.828 01:32:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:27:25.828 01:32:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3546159 00:27:25.828 01:32:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3546159 00:27:25.828 01:32:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 3546159 ']' 00:27:25.828 01:32:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:27:25.828 01:32:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:25.828 01:32:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:25.828 01:32:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:25.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:25.828 01:32:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:25.828 01:32:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:25.828 [2024-07-16 01:32:51.778645] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:27:25.828 [2024-07-16 01:32:51.778688] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3546159 ] 00:27:25.828 EAL: No free 2048 kB hugepages reported on node 1 00:27:26.086 [2024-07-16 01:32:51.835351] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:26.086 [2024-07-16 01:32:51.907193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:26.086 [2024-07-16 01:32:51.907196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:26.649 01:32:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:26.649 01:32:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:27:26.649 01:32:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:27:26.649 01:32:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:26.649 01:32:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:26.649 01:32:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:27:26.649 01:32:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:27:26.650 01:32:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:27:26.650 01:32:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:26.650 01:32:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:26.650 01:32:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:27:26.650 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:27:26.650 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:27:26.650 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:27:26.650 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:27:26.650 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:27:26.650 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:27:26.650 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:26.650 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:27:26.650 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:27:26.650 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:26.650 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:26.650 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:27:26.650 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:26.650 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:26.650 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:27:26.650 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:26.650 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:26.650 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:26.650 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:26.650 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:27:26.650 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:27:26.650 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:26.650 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:27:26.650 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:26.650 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:27:26.650 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:27:26.650 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:27:26.650 ' 00:27:29.175 [2024-07-16 01:32:54.968060] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:30.547 [2024-07-16 01:32:56.143987] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:27:32.441 [2024-07-16 01:32:58.330587] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:27:34.334 [2024-07-16 01:33:00.248657] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:27:35.704 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:27:35.704 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:27:35.704 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:27:35.704 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:27:35.704 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:27:35.704 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:27:35.704 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:27:35.704 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:35.704 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:27:35.704 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:27:35.704 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:35.704 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:35.704 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:27:35.704 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:35.704 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:35.704 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:27:35.704 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:35.704 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:35.704 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:35.704 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:35.704 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:27:35.704 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:27:35.704 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:35.704 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:27:35.705 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:35.705 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:27:35.705 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:27:35.705 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:27:35.962 01:33:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:27:35.962 01:33:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:35.962 01:33:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:35.962 01:33:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:27:35.962 01:33:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:35.962 01:33:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:35.962 01:33:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:27:35.962 01:33:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:27:36.219 01:33:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:27:36.477 01:33:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:27:36.477 01:33:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:27:36.477 01:33:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:36.477 01:33:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:36.477 01:33:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:27:36.477 01:33:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:36.477 01:33:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:36.477 01:33:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:27:36.477 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:27:36.477 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:36.477 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:27:36.477 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:27:36.477 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:27:36.477 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:27:36.477 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:36.477 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:27:36.477 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:27:36.477 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:27:36.477 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:27:36.477 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:27:36.477 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:27:36.477 ' 00:27:41.753 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:27:41.753 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:27:41.753 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:41.753 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:27:41.753 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:27:41.753 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:27:41.753 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:27:41.753 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:41.753 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:27:41.753 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:27:41.753 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:27:41.753 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:27:41.753 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:27:41.753 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:27:41.753 01:33:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:27:41.753 01:33:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:41.753 01:33:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:41.753 01:33:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3546159 00:27:41.753 01:33:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 3546159 ']' 00:27:41.753 01:33:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 3546159 00:27:41.753 01:33:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:27:41.753 01:33:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:41.753 01:33:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3546159 00:27:41.753 01:33:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:41.753 01:33:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:41.753 01:33:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3546159' 00:27:41.753 killing process with pid 3546159 00:27:41.753 01:33:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 3546159 00:27:41.753 01:33:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 3546159 00:27:41.753 01:33:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:27:41.753 01:33:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:27:41.753 01:33:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3546159 ']' 00:27:41.753 01:33:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3546159 00:27:41.753 01:33:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 3546159 ']' 00:27:41.753 01:33:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 3546159 00:27:41.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3546159) - No such process 00:27:41.753 01:33:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 3546159 is not found' 00:27:41.753 Process with pid 3546159 is not found 00:27:41.753 01:33:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:27:41.753 01:33:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:27:41.753 01:33:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:27:41.753 00:27:41.753 real 0m15.865s 00:27:41.753 user 0m32.951s 00:27:41.753 sys 0m0.693s 00:27:41.753 01:33:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:41.753 01:33:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:41.753 ************************************ 00:27:41.753 END TEST spdkcli_nvmf_tcp 00:27:41.753 ************************************ 00:27:41.753 01:33:07 -- common/autotest_common.sh@1142 -- # return 0 00:27:41.753 01:33:07 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:41.753 01:33:07 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:41.753 01:33:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:41.753 01:33:07 -- common/autotest_common.sh@10 -- # set +x 00:27:41.753 ************************************ 00:27:41.753 START TEST nvmf_identify_passthru 00:27:41.753 ************************************ 00:27:41.753 01:33:07 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:41.753 * Looking for test storage... 00:27:41.753 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:41.753 01:33:07 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:41.753 01:33:07 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:27:41.753 01:33:07 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:41.753 01:33:07 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:41.753 01:33:07 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:41.753 01:33:07 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:41.753 01:33:07 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:41.753 01:33:07 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:41.753 01:33:07 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:41.753 01:33:07 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:41.754 01:33:07 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:41.754 01:33:07 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:41.754 01:33:07 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:41.754 01:33:07 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:41.754 01:33:07 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:41.754 01:33:07 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:41.754 01:33:07 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:41.754 01:33:07 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:41.754 01:33:07 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:41.754 01:33:07 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:41.754 01:33:07 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:41.754 01:33:07 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:41.754 01:33:07 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.754 01:33:07 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.754 01:33:07 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.754 01:33:07 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:27:41.754 01:33:07 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.754 01:33:07 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:27:41.754 01:33:07 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:41.754 01:33:07 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:41.754 01:33:07 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:41.754 01:33:07 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:41.754 01:33:07 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:41.754 01:33:07 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:41.754 01:33:07 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:41.754 01:33:07 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:41.754 01:33:07 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:41.754 01:33:07 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:41.754 01:33:07 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:41.754 01:33:07 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:41.754 01:33:07 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.754 01:33:07 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.754 01:33:07 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.754 01:33:07 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:27:41.754 01:33:07 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:41.754 01:33:07 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:27:41.754 01:33:07 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:41.754 01:33:07 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:41.754 01:33:07 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:41.754 01:33:07 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:41.754 01:33:07 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:41.754 01:33:07 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:41.754 01:33:07 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:41.754 01:33:07 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:41.754 01:33:07 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:41.754 01:33:07 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:41.754 01:33:07 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:27:41.754 01:33:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:47.015 01:33:12 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:47.015 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:47.015 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:47.015 Found net devices under 0000:86:00.0: cvl_0_0 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:47.015 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:47.016 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:47.016 Found net devices under 0000:86:00.1: cvl_0_1 00:27:47.016 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:47.016 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:47.016 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:27:47.016 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:47.016 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
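Both E810 ports have now been discovered (cvl_0_0 and cvl_0_1), and the nvmf_tcp_init trace that follows wires them into a loopback topology: one port is moved into a private network namespace to act as the target while the other stays in the root namespace as the initiator, so NVMe/TCP traffic crosses real hardware on a single host. Condensed into a standalone sketch, using exactly the names and addresses the trace assigns (the preliminary address flushes are omitted):

# Loopback NVMe/TCP topology from nvmf_tcp_init: target NIC port hidden in a
# namespace, initiator port left in the root namespace, same physical host.
ip netns add cvl_0_0_ns_spdk                       # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address (root ns)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.2                                 # root ns -> target sanity check

The two pings recorded in the trace confirm reachability in both directions before the passthru target is started inside the namespace.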
00:27:47.016 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:47.016 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:47.016 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:47.016 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:47.016 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:47.016 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:47.016 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:47.016 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:47.016 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:47.016 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:47.016 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:47.016 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:47.016 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:47.016 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:47.016 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:47.016 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:47.016 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:47.016 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:47.016 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:47.016 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:47.016 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:47.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:47.016 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:27:47.016 00:27:47.016 --- 10.0.0.2 ping statistics --- 00:27:47.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.016 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:27:47.016 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:47.016 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:47.016 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:27:47.016 00:27:47.016 --- 10.0.0.1 ping statistics --- 00:27:47.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.016 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:27:47.016 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:47.016 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:27:47.016 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:47.016 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:47.016 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:47.016 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:47.016 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:47.016 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:47.016 01:33:12 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:47.016 01:33:12 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:27:47.016 01:33:12 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:47.016 01:33:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:47.016 01:33:12 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:27:47.016 01:33:12 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:27:47.016 01:33:12 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:27:47.016 01:33:12 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:27:47.016 01:33:12 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:27:47.016 01:33:12 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:27:47.016 01:33:12 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:27:47.016 01:33:12 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:47.016 01:33:12 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:47.016 01:33:12 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:27:47.016 01:33:12 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:27:47.016 01:33:12 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:27:47.016 01:33:12 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:5e:00.0 00:27:47.016 01:33:12 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:27:47.016 01:33:12 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:27:47.016 01:33:12 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:27:47.016 01:33:12 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:27:47.016 01:33:12 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:27:47.016 EAL: No free 2048 kB hugepages reported on node 1 00:27:52.271 
01:33:17 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLN951000C61P6AGN 00:27:52.271 01:33:17 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:27:52.271 01:33:17 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:27:52.271 01:33:17 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:27:52.271 EAL: No free 2048 kB hugepages reported on node 1 00:27:56.453 01:33:22 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:27:56.453 01:33:22 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:27:56.453 01:33:22 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:56.453 01:33:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:56.453 01:33:22 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:27:56.453 01:33:22 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:56.453 01:33:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:56.453 01:33:22 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3553697 00:27:56.453 01:33:22 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:56.453 01:33:22 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3553697 00:27:56.453 01:33:22 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 3553697 ']' 00:27:56.453 01:33:22 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:56.453 01:33:22 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:56.453 01:33:22 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:56.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:56.453 01:33:22 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:56.453 01:33:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:56.453 01:33:22 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:56.453 [2024-07-16 01:33:22.100469] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:27:56.453 [2024-07-16 01:33:22.100521] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:56.453 EAL: No free 2048 kB hugepages reported on node 1 00:27:56.453 [2024-07-16 01:33:22.158545] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:56.453 [2024-07-16 01:33:22.242605] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:56.453 [2024-07-16 01:33:22.242645] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:56.453 [2024-07-16 01:33:22.242656] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:56.454 [2024-07-16 01:33:22.242662] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:56.454 [2024-07-16 01:33:22.242666] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:56.454 [2024-07-16 01:33:22.242720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:56.454 [2024-07-16 01:33:22.242764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:56.454 [2024-07-16 01:33:22.242766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:56.454 [2024-07-16 01:33:22.242736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:57.020 01:33:22 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:57.020 01:33:22 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:27:57.020 01:33:22 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:27:57.020 01:33:22 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.020 01:33:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:57.020 INFO: Log level set to 20 00:27:57.020 INFO: Requests: 00:27:57.020 { 00:27:57.020 "jsonrpc": "2.0", 00:27:57.020 "method": "nvmf_set_config", 00:27:57.020 "id": 1, 00:27:57.020 "params": { 00:27:57.020 "admin_cmd_passthru": { 00:27:57.020 "identify_ctrlr": true 00:27:57.020 } 00:27:57.020 } 00:27:57.020 } 00:27:57.020 00:27:57.020 INFO: response: 00:27:57.020 { 00:27:57.020 "jsonrpc": "2.0", 00:27:57.020 "id": 1, 00:27:57.020 "result": true 00:27:57.020 } 00:27:57.020 00:27:57.020 01:33:22 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.020 01:33:22 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:27:57.020 01:33:22 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.020 01:33:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:57.020 INFO: Setting log level to 20 00:27:57.020 INFO: Setting log level to 20 00:27:57.020 INFO: Log level set to 20 00:27:57.020 INFO: Log level set to 20 00:27:57.020 INFO: Requests: 00:27:57.020 { 00:27:57.020 "jsonrpc": "2.0", 00:27:57.020 "method": "framework_start_init", 00:27:57.020 "id": 1 00:27:57.020 } 00:27:57.020 00:27:57.020 INFO: Requests: 00:27:57.020 { 00:27:57.020 "jsonrpc": "2.0", 00:27:57.020 "method": "framework_start_init", 00:27:57.020 "id": 1 00:27:57.020 } 00:27:57.020 00:27:57.020 [2024-07-16 01:33:22.989253] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:27:57.020 INFO: response: 00:27:57.020 { 00:27:57.020 "jsonrpc": "2.0", 00:27:57.020 "id": 1, 00:27:57.020 "result": true 00:27:57.020 } 00:27:57.020 00:27:57.020 INFO: response: 00:27:57.020 { 00:27:57.020 "jsonrpc": "2.0", 00:27:57.020 "id": 1, 00:27:57.020 "result": true 00:27:57.020 } 00:27:57.020 00:27:57.020 01:33:22 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.020 01:33:22 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:57.020 01:33:22 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.020 01:33:22 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x
00:27:57.020 INFO: Setting log level to 40
00:27:57.020 INFO: Setting log level to 40
00:27:57.020 INFO: Setting log level to 40
00:27:57.020 [2024-07-16 01:33:22.998824] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:57.020 01:33:23 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:57.020 01:33:23 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt
00:27:57.020 01:33:23 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable
00:27:57.020 01:33:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:27:57.347 01:33:23 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0
00:27:57.347 01:33:23 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:57.347 01:33:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:28:00.623 Nvme0n1
00:28:00.623 01:33:25 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:00.623 01:33:25 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
00:28:00.623 01:33:25 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:00.623 01:33:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:28:00.623 01:33:25 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:00.623 01:33:25 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:28:00.623 01:33:25 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:00.623 01:33:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:28:00.623 01:33:25 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:00.623 01:33:25 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:00.623 01:33:25 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:00.623 01:33:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:28:00.623 [2024-07-16 01:33:25.887444] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:00.623 01:33:25 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:00.623 01:33:25 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems
00:28:00.623 01:33:25 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:00.623 01:33:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:28:00.623 [
00:28:00.623 {
00:28:00.623 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:28:00.623 "subtype": "Discovery",
00:28:00.623 "listen_addresses": [],
00:28:00.623 "allow_any_host": true,
00:28:00.623 "hosts": []
00:28:00.623 },
00:28:00.623 {
00:28:00.623 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:28:00.623 "subtype": "NVMe",
00:28:00.623 "listen_addresses": [
00:28:00.623 {
00:28:00.623 "trtype": "TCP",
00:28:00.623 "adrfam": "IPv4",
00:28:00.623 "traddr": "10.0.0.2",
00:28:00.623 "trsvcid": "4420"
00:28:00.623 }
00:28:00.623 ],
00:28:00.623 "allow_any_host": true,
00:28:00.623 "hosts": [],
00:28:00.623 "serial_number": "SPDK00000000000001",
00:28:00.623 "model_number": "SPDK bdev Controller",
00:28:00.623 "max_namespaces": 1,
00:28:00.623 "min_cntlid": 1,
00:28:00.623 "max_cntlid": 65519,
00:28:00.623 "namespaces": [
00:28:00.623 {
00:28:00.623 "nsid": 1,
00:28:00.623 "bdev_name": "Nvme0n1",
00:28:00.623 "name": "Nvme0n1",
00:28:00.623 "nguid": "FAF82AEC3AE440168AC308EF303EF0CB",
00:28:00.623 "uuid": "faf82aec-3ae4-4016-8ac3-08ef303ef0cb"
00:28:00.623 }
00:28:00.623 ]
00:28:00.623 }
00:28:00.623 ]
00:28:00.623 01:33:25 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:00.623 01:33:25 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:28:00.623 01:33:25 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:'
00:28:00.623 01:33:25 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}'
00:28:00.623 EAL: No free 2048 kB hugepages reported on node 1
00:28:00.623 01:33:26 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLN951000C61P6AGN
00:28:00.623 01:33:26 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:28:00.623 01:33:26 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:'
00:28:00.623 01:33:26 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}'
00:28:00.623 EAL: No free 2048 kB hugepages reported on node 1
00:28:00.623 01:33:26 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL
00:28:00.623 01:33:26 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLN951000C61P6AGN '!=' PHLN951000C61P6AGN ']'
00:28:00.624 01:33:26 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']'
00:28:00.624 01:33:26 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:00.624 01:33:26 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:00.624 01:33:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:28:00.624 01:33:26 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:00.624 01:33:26 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT
00:28:00.624 01:33:26 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini
00:28:00.624 01:33:26 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup
00:28:00.624 01:33:26 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync
00:28:00.624 01:33:26 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:28:00.624 01:33:26 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e
00:28:00.624 01:33:26 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20}
00:28:00.624 01:33:26 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:28:00.624 rmmod nvme_tcp
00:28:00.624 rmmod nvme_fabrics
00:28:00.624 rmmod nvme_keyring
00:28:00.624 01:33:26 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:28:00.624 01:33:26 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e
00:28:00.624 01:33:26
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:28:00.624 01:33:26 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 3553697 ']' 00:28:00.624 01:33:26 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 3553697 00:28:00.624 01:33:26 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 3553697 ']' 00:28:00.624 01:33:26 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 3553697 00:28:00.624 01:33:26 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:28:00.624 01:33:26 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:00.624 01:33:26 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3553697 00:28:00.624 01:33:26 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:00.624 01:33:26 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:00.624 01:33:26 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3553697' 00:28:00.624 killing process with pid 3553697 00:28:00.624 01:33:26 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 3553697 00:28:00.624 01:33:26 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 3553697 00:28:02.522 01:33:28 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:02.522 01:33:28 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:02.522 01:33:28 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:02.522 01:33:28 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:02.522 01:33:28 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:02.522 01:33:28 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:02.522 01:33:28 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:02.522 01:33:28 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:05.050 01:33:30 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:05.050 00:28:05.050 real 0m22.955s 00:28:05.050 user 0m32.998s 00:28:05.050 sys 0m4.529s 00:28:05.050 01:33:30 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:05.050 01:33:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:05.050 ************************************ 00:28:05.050 END TEST nvmf_identify_passthru 00:28:05.050 ************************************ 00:28:05.050 01:33:30 -- common/autotest_common.sh@1142 -- # return 0 00:28:05.050 01:33:30 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:28:05.050 01:33:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:05.050 01:33:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:05.050 01:33:30 -- common/autotest_common.sh@10 -- # set +x 00:28:05.050 ************************************ 00:28:05.050 START TEST nvmf_dif 00:28:05.050 ************************************ 00:28:05.050 01:33:30 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:28:05.050 * Looking for test storage... 
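Before the log moves on to nvmf_dif: the identify_passthru test above passed because the comparisons at identify_passthru.sh@63 and @68 both came back equal, i.e. the serial and model number read through the NVMe/TCP target (PHLN951000C61P6AGN / INTEL) matched what the PCIe device at 0000:5e:00.0 reports directly. Condensed, the check amounts to the following (a paraphrased sketch, not the literal script; paths as used in this job):

  id=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify
  # Identify the controller directly over PCIe ...
  pcie_sn=$($id -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 | awk '/Serial Number:/ {print $3}')
  # ... and again through the target, which passes admin identify through.
  tcp_sn=$($id -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | awk '/Serial Number:/ {print $3}')
  # Passthru works only if the two answers agree.
  [ "$pcie_sn" = "$tcp_sn" ] || exit 1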
00:28:05.050 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:05.050 01:33:30 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:05.050 01:33:30 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:28:05.050 01:33:30 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:05.050 01:33:30 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:05.050 01:33:30 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:05.050 01:33:30 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:05.050 01:33:30 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:05.050 01:33:30 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:05.050 01:33:30 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:05.050 01:33:30 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:05.050 01:33:30 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:05.050 01:33:30 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:05.050 01:33:30 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:05.050 01:33:30 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:05.050 01:33:30 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:05.050 01:33:30 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:05.050 01:33:30 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:05.050 01:33:30 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:05.050 01:33:30 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:05.050 01:33:30 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:05.050 01:33:30 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:05.050 01:33:30 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:05.050 01:33:30 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.050 01:33:30 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.050 01:33:30 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.050 01:33:30 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:28:05.050 01:33:30 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.050 01:33:30 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:28:05.050 01:33:30 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:05.050 01:33:30 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:05.050 01:33:30 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:05.050 01:33:30 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:05.050 01:33:30 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:05.050 01:33:30 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:05.050 01:33:30 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:05.050 01:33:30 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:05.050 01:33:30 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:28:05.050 01:33:30 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:28:05.050 01:33:30 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:28:05.050 01:33:30 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:28:05.050 01:33:30 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:28:05.050 01:33:30 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:05.050 01:33:30 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:05.050 01:33:30 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:05.050 01:33:30 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:05.050 01:33:30 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:05.050 01:33:30 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:05.050 01:33:30 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:05.050 01:33:30 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:05.050 01:33:30 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:05.050 01:33:30 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:05.050 01:33:30 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:28:05.050 01:33:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:10.402 01:33:35 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:10.403 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:10.403 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
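The device-classification trace above buckets each NIC by PCI device ID (0x159b lands in the e810 array) and then resolves every PCI function to its kernel net device through sysfs, which is how cvl_0_0 and cvl_0_1 are found in the lines that follow. The sysfs lookup reduces to this (a sketch; the two addresses are the ports discovered above):

  # Map a PCI function to its net device, as the
  # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansion does.
  for pci in 0000:86:00.0 0000:86:00.1; do
      for dev in /sys/bus/pci/devices/$pci/net/*; do
          echo "$pci -> ${dev##*/}"   # e.g. 0000:86:00.0 -> cvl_0_0
      done
  done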
00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:10.403 Found net devices under 0000:86:00.0: cvl_0_0 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:10.403 Found net devices under 0000:86:00.1: cvl_0_1 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:10.403 01:33:35 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:10.403 01:33:36 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:10.403 01:33:36 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:10.403 01:33:36 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:10.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:10.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:28:10.403 00:28:10.403 --- 10.0.0.2 ping statistics --- 00:28:10.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:10.403 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:28:10.403 01:33:36 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:10.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:10.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:28:10.403 00:28:10.403 --- 10.0.0.1 ping statistics --- 00:28:10.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:10.403 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:28:10.403 01:33:36 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:10.403 01:33:36 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:28:10.403 01:33:36 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:28:10.403 01:33:36 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:12.926 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:28:12.926 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:28:12.926 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:28:12.926 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:28:12.926 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:28:12.926 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:28:12.926 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:28:12.926 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:28:12.926 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:28:12.926 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:28:12.926 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:28:12.926 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:28:12.926 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:28:12.926 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:28:12.926 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:28:12.926 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:28:12.926 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:28:12.926 01:33:38 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:12.926 01:33:38 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:12.926 01:33:38 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:12.926 01:33:38 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:12.926 01:33:38 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:12.926 01:33:38 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:12.926 01:33:38 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:28:12.926 01:33:38 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:28:12.926 01:33:38 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:12.926 01:33:38 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:12.926 01:33:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:12.926 01:33:38 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=3559378 00:28:12.926 01:33:38 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 3559378 00:28:12.926 01:33:38 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:28:12.926 01:33:38 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 3559378 ']' 00:28:12.926 01:33:38 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:12.926 01:33:38 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:12.926 01:33:38 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:12.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:12.926 01:33:38 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:12.926 01:33:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:12.926 [2024-07-16 01:33:38.641304] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:28:12.926 [2024-07-16 01:33:38.641355] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:12.926 EAL: No free 2048 kB hugepages reported on node 1 00:28:12.926 [2024-07-16 01:33:38.698960] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:12.926 [2024-07-16 01:33:38.776723] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:12.926 [2024-07-16 01:33:38.776758] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:12.926 [2024-07-16 01:33:38.776765] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:12.926 [2024-07-16 01:33:38.776771] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:12.926 [2024-07-16 01:33:38.776778] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
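For reference, the nvmf_tcp_init trace a few lines above built the following topology: the target-facing port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the initiator keeps cvl_0_1 with 10.0.0.1/24 in the root namespace, and TCP port 4420 is opened, so NVMe/TCP traffic really crosses the e810 link. Condensed from the commands in the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target

This is also why the nvmf_tgt above is launched under 'ip netns exec cvl_0_0_ns_spdk'.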
00:28:12.926 [2024-07-16 01:33:38.776796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:13.487 01:33:39 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:13.487 01:33:39 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:28:13.487 01:33:39 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:13.487 01:33:39 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:13.487 01:33:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:13.487 01:33:39 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:13.487 01:33:39 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:28:13.487 01:33:39 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:28:13.487 01:33:39 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.487 01:33:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:13.487 [2024-07-16 01:33:39.475055] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:13.745 01:33:39 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.745 01:33:39 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:28:13.745 01:33:39 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:13.745 01:33:39 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:13.745 01:33:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:13.745 ************************************ 00:28:13.745 START TEST fio_dif_1_default 00:28:13.745 ************************************ 00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:13.745 bdev_null0 00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:13.745 [2024-07-16 01:33:39.547349] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:13.745 { 00:28:13.745 "params": { 00:28:13.745 "name": "Nvme$subsystem", 00:28:13.745 "trtype": "$TEST_TRANSPORT", 00:28:13.745 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:13.745 "adrfam": "ipv4", 00:28:13.745 "trsvcid": "$NVMF_PORT", 00:28:13.745 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:13.745 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:13.745 "hdgst": ${hdgst:-false}, 00:28:13.745 "ddgst": ${ddgst:-false} 00:28:13.745 }, 00:28:13.745 "method": "bdev_nvme_attach_controller" 00:28:13.745 } 00:28:13.745 EOF 00:28:13.745 )") 00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default 
-- target/dif.sh@72 -- # (( file <= files ))
00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan
00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq .
00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=,
00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:28:13.745 "params": {
00:28:13.745 "name": "Nvme0",
00:28:13.745 "trtype": "tcp",
00:28:13.745 "traddr": "10.0.0.2",
00:28:13.745 "adrfam": "ipv4",
00:28:13.745 "trsvcid": "4420",
00:28:13.745 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:28:13.745 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:28:13.745 "hdgst": false,
00:28:13.745 "ddgst": false
00:28:13.745 },
00:28:13.745 "method": "bdev_nvme_attach_controller"
00:28:13.745 }'
00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib=
00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib=
00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:28:13.745 01:33:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:28:14.002 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:28:14.002 fio-3.35
00:28:14.002 Starting 1 thread
00:28:14.002 EAL: No free 2048 kB hugepages reported on node 1
00:28:26.186
00:28:26.186 filename0: (groupid=0, jobs=1): err= 0: pid=3559759: Tue Jul 16 01:33:50 2024
00:28:26.186 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10015msec)
00:28:26.186 slat (nsec): min=4654, max=21546, avg=6023.64, stdev=940.01
00:28:26.186 clat (usec): min=40779, max=48394, avg=41026.21, stdev=486.83
00:28:26.186 lat (usec): min=40785, max=48408, avg=41032.23, stdev=486.90
00:28:26.186 clat percentiles (usec):
00:28:26.186 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157],
00:28:26.186 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:28:26.186 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:28:26.186 | 99.00th=[42206], 99.50th=[42206], 99.90th=[48497], 99.95th=[48497],
00:28:26.186 | 99.99th=[48497]
00:28:26.186 bw ( KiB/s): min= 383, max= 416, per=99.53%, avg=388.75, stdev=11.75, samples=20
00:28:26.186 iops : min= 95, max= 104, avg=97.15, stdev= 2.96, samples=20
00:28:26.186 lat (msec) : 50=100.00%
00:28:26.186 cpu : usr=94.30%, sys=5.46%, ctx=13, majf=0, minf=224
00:28:26.186 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:28:26.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:26.186 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:28:26.186 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:28:26.186 latency : target=0, window=0, percentile=100.00%, depth=4
00:28:26.186
00:28:26.186 Run status group 0 (all jobs):
00:28:26.186 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10015-10015msec
00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0
00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub
00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@"
00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0
00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0
00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:26.186
00:28:26.186 real 0m11.147s
00:28:26.186 user 0m16.008s
00:28:26.186 sys 0m0.791s
00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable
00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:28:26.186 ************************************
00:28:26.186 END TEST fio_dif_1_default
00:28:26.186 ************************************
00:28:26.186 01:33:50 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0
00:28:26.186 01:33:50 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems
00:28:26.186 01:33:50 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:28:26.186 01:33:50 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable
00:28:26.186 01:33:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:28:26.186 ************************************
00:28:26.186 START TEST fio_dif_1_multi_subsystems
00:28:26.186 ************************************
00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems
00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1
00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1
00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub
00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@"
00:28:26.186 01:33:50
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:26.186 bdev_null0 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:26.186 [2024-07-16 01:33:50.762930] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:26.186 bdev_null1 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:26.186 { 00:28:26.186 "params": { 00:28:26.186 "name": "Nvme$subsystem", 00:28:26.186 "trtype": "$TEST_TRANSPORT", 00:28:26.186 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.186 "adrfam": "ipv4", 00:28:26.186 "trsvcid": "$NVMF_PORT", 00:28:26.186 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.186 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.186 "hdgst": ${hdgst:-false}, 00:28:26.186 "ddgst": ${ddgst:-false} 00:28:26.186 }, 00:28:26.186 "method": "bdev_nvme_attach_controller" 00:28:26.186 } 00:28:26.186 EOF 00:28:26.186 )") 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:26.186 
01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:26.186 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:26.186 { 00:28:26.186 "params": { 00:28:26.186 "name": "Nvme$subsystem", 00:28:26.186 "trtype": "$TEST_TRANSPORT", 00:28:26.186 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.186 "adrfam": "ipv4", 00:28:26.186 "trsvcid": "$NVMF_PORT", 00:28:26.186 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.186 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.186 "hdgst": ${hdgst:-false}, 00:28:26.186 "ddgst": ${ddgst:-false} 00:28:26.186 }, 00:28:26.186 "method": "bdev_nvme_attach_controller" 00:28:26.186 } 00:28:26.187 EOF 00:28:26.187 )") 00:28:26.187 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:28:26.187 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:28:26.187 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:28:26.187 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
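The gen_nvmf_target_json fragments collected above are joined by jq (the IFS=, and printf that follow) into the JSON handed to fio on /dev/fd/62; fio's spdk_bdev ioengine parses it as a regular SPDK JSON config and attaches one NVMe-oF controller per params block. A rough standalone equivalent for the first of the two controllers (the outer "subsystems"/"bdev" wrapper is SPDK's JSON-config framing supplied by gen_nvmf_target_json; /tmp/bdev.json is illustrative only):

  cat > /tmp/bdev.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  # The test then runs fio with the plugin preloaded (job file arrives on fd 61):
  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/bdev.json <jobfile>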
00:28:26.187 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:28:26.187 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:26.187 "params": { 00:28:26.187 "name": "Nvme0", 00:28:26.187 "trtype": "tcp", 00:28:26.187 "traddr": "10.0.0.2", 00:28:26.187 "adrfam": "ipv4", 00:28:26.187 "trsvcid": "4420", 00:28:26.187 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:26.187 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:26.187 "hdgst": false, 00:28:26.187 "ddgst": false 00:28:26.187 }, 00:28:26.187 "method": "bdev_nvme_attach_controller" 00:28:26.187 },{ 00:28:26.187 "params": { 00:28:26.187 "name": "Nvme1", 00:28:26.187 "trtype": "tcp", 00:28:26.187 "traddr": "10.0.0.2", 00:28:26.187 "adrfam": "ipv4", 00:28:26.187 "trsvcid": "4420", 00:28:26.187 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:26.187 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:26.187 "hdgst": false, 00:28:26.187 "ddgst": false 00:28:26.187 }, 00:28:26.187 "method": "bdev_nvme_attach_controller" 00:28:26.187 }' 00:28:26.187 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:26.187 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:26.187 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:26.187 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:26.187 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:26.187 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:26.187 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:26.187 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:26.187 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:26.187 01:33:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:26.187 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:26.187 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:26.187 fio-3.35 00:28:26.187 Starting 2 threads 00:28:26.187 EAL: No free 2048 kB hugepages reported on node 1 00:28:36.146 00:28:36.146 filename0: (groupid=0, jobs=1): err= 0: pid=3561726: Tue Jul 16 01:34:01 2024 00:28:36.146 read: IOPS=142, BW=571KiB/s (584kB/s)(5712KiB/10007msec) 00:28:36.146 slat (nsec): min=5794, max=25395, avg=7555.51, stdev=2513.71 00:28:36.146 clat (usec): min=409, max=44210, avg=28006.99, stdev=19013.31 00:28:36.146 lat (usec): min=415, max=44235, avg=28014.55, stdev=19012.75 00:28:36.146 clat percentiles (usec): 00:28:36.146 | 1.00th=[ 424], 5.00th=[ 478], 10.00th=[ 586], 20.00th=[ 619], 00:28:36.146 | 30.00th=[ 947], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:28:36.146 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:28:36.146 | 99.00th=[42206], 99.50th=[42730], 99.90th=[44303], 99.95th=[44303], 00:28:36.146 | 99.99th=[44303] 
00:28:36.146 bw ( KiB/s): min= 384, max= 768, per=42.85%, avg=569.60, stdev=181.79, samples=20 00:28:36.146 iops : min= 96, max= 192, avg=142.40, stdev=45.45, samples=20 00:28:36.146 lat (usec) : 500=8.33%, 750=19.89%, 1000=3.29% 00:28:36.146 lat (msec) : 2=0.98%, 50=67.51% 00:28:36.146 cpu : usr=97.67%, sys=2.07%, ctx=9, majf=0, minf=115 00:28:36.146 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:36.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:36.146 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:36.146 issued rwts: total=1428,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:36.146 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:36.146 filename1: (groupid=0, jobs=1): err= 0: pid=3561727: Tue Jul 16 01:34:01 2024 00:28:36.146 read: IOPS=189, BW=759KiB/s (777kB/s)(7616KiB/10037msec) 00:28:36.146 slat (nsec): min=5794, max=25910, avg=7270.67, stdev=2337.98 00:28:36.146 clat (usec): min=392, max=43687, avg=21064.85, stdev=20491.72 00:28:36.146 lat (usec): min=398, max=43713, avg=21072.12, stdev=20490.99 00:28:36.146 clat percentiles (usec): 00:28:36.146 | 1.00th=[ 412], 5.00th=[ 420], 10.00th=[ 429], 20.00th=[ 437], 00:28:36.146 | 30.00th=[ 453], 40.00th=[ 586], 50.00th=[40633], 60.00th=[41157], 00:28:36.146 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:28:36.146 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43779], 99.95th=[43779], 00:28:36.146 | 99.99th=[43779] 00:28:36.146 bw ( KiB/s): min= 672, max= 768, per=57.23%, avg=760.00, stdev=25.16, samples=20 00:28:36.146 iops : min= 168, max= 192, avg=190.00, stdev= 6.29, samples=20 00:28:36.146 lat (usec) : 500=37.24%, 750=10.77%, 1000=1.37% 00:28:36.146 lat (msec) : 2=0.42%, 50=50.21% 00:28:36.146 cpu : usr=97.84%, sys=1.91%, ctx=9, majf=0, minf=133 00:28:36.146 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:36.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:36.146 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:36.146 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:36.146 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:36.146 00:28:36.146 Run status group 0 (all jobs): 00:28:36.146 READ: bw=1328KiB/s (1360kB/s), 571KiB/s-759KiB/s (584kB/s-777kB/s), io=13.0MiB (13.6MB), run=10007-10037msec 00:28:36.146 01:34:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:28:36.146 01:34:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:28:36.146 01:34:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:36.146 01:34:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:36.146 01:34:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:28:36.146 01:34:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:36.146 01:34:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.146 01:34:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:36.146 01:34:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.146 01:34:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:36.146 01:34:01 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.146 01:34:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:36.146 01:34:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.146 01:34:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:36.146 01:34:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:36.146 01:34:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:28:36.146 01:34:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:36.146 01:34:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.146 01:34:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:36.146 01:34:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.146 01:34:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:36.146 01:34:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.146 01:34:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:36.146 01:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.146 00:28:36.146 real 0m11.270s 00:28:36.146 user 0m26.583s 00:28:36.146 sys 0m0.715s 00:28:36.146 01:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:36.146 01:34:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:36.146 ************************************ 00:28:36.146 END TEST fio_dif_1_multi_subsystems 00:28:36.146 ************************************ 00:28:36.146 01:34:02 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:28:36.146 01:34:02 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:28:36.146 01:34:02 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:36.146 01:34:02 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:36.146 01:34:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:36.146 ************************************ 00:28:36.146 START TEST fio_dif_rand_params 00:28:36.146 ************************************ 00:28:36.146 01:34:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:28:36.146 01:34:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:28:36.146 01:34:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:28:36.146 01:34:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:28:36.146 01:34:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:28:36.146 01:34:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub 
in "$@" 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:36.147 bdev_null0 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:36.147 [2024-07-16 01:34:02.105366] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:36.147 { 00:28:36.147 "params": { 00:28:36.147 "name": "Nvme$subsystem", 00:28:36.147 "trtype": "$TEST_TRANSPORT", 00:28:36.147 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.147 "adrfam": "ipv4", 00:28:36.147 
"trsvcid": "$NVMF_PORT", 00:28:36.147 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.147 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.147 "hdgst": ${hdgst:-false}, 00:28:36.147 "ddgst": ${ddgst:-false} 00:28:36.147 }, 00:28:36.147 "method": "bdev_nvme_attach_controller" 00:28:36.147 } 00:28:36.147 EOF 00:28:36.147 )") 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:36.147 01:34:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:36.147 "params": { 00:28:36.147 "name": "Nvme0", 00:28:36.147 "trtype": "tcp", 00:28:36.147 "traddr": "10.0.0.2", 00:28:36.147 "adrfam": "ipv4", 00:28:36.147 "trsvcid": "4420", 00:28:36.147 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:36.147 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:36.147 "hdgst": false, 00:28:36.147 "ddgst": false 00:28:36.147 }, 00:28:36.147 "method": "bdev_nvme_attach_controller" 00:28:36.147 }' 00:28:36.433 01:34:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:36.433 01:34:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:36.433 01:34:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:36.433 01:34:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:36.433 01:34:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:36.433 01:34:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:36.433 01:34:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:36.433 01:34:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:36.433 01:34:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:36.433 01:34:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:36.692 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:36.692 ... 
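The fio banner below only echoes the merged job options (randread, 128KiB blocks, iodepth=3, three clones of filename0); the job file itself comes from gen_fio_conf on /dev/fd/61 and never appears in the log. A hedged reconstruction from the knobs set at dif.sh@103 (bs=128k, numjobs=3, iodepth=3, runtime=5) and the single filename0 group; the directives are standard fio, but the exact file gen_fio_conf emits may differ:

cat > /tmp/filename0.fio <<'EOF'
[global]
; ioengine=spdk_bdev is injected via --ioengine on the traced command line
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5
time_based=1
[filename0]
; bdev name assumed from the Nvme0 attach-controller stanza in the JSON config
filename=Nvme0n1
EOF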
00:28:36.692 fio-3.35 00:28:36.692 Starting 3 threads 00:28:36.692 EAL: No free 2048 kB hugepages reported on node 1 00:28:43.234 00:28:43.234 filename0: (groupid=0, jobs=1): err= 0: pid=3563701: Tue Jul 16 01:34:08 2024 00:28:43.234 read: IOPS=310, BW=38.8MiB/s (40.7MB/s)(194MiB/5006msec) 00:28:43.234 slat (nsec): min=6068, max=29749, avg=11066.02, stdev=2790.11 00:28:43.234 clat (usec): min=3032, max=89300, avg=9656.24, stdev=8047.93 00:28:43.234 lat (usec): min=3046, max=89312, avg=9667.31, stdev=8047.91 00:28:43.234 clat percentiles (usec): 00:28:43.234 | 1.00th=[ 3752], 5.00th=[ 5145], 10.00th=[ 5866], 20.00th=[ 6521], 00:28:43.234 | 30.00th=[ 7373], 40.00th=[ 8029], 50.00th=[ 8455], 60.00th=[ 8848], 00:28:43.234 | 70.00th=[ 9241], 80.00th=[ 9765], 90.00th=[10421], 95.00th=[11338], 00:28:43.234 | 99.00th=[49546], 99.50th=[50070], 99.90th=[51119], 99.95th=[89654], 00:28:43.234 | 99.99th=[89654] 00:28:43.234 bw ( KiB/s): min=27136, max=48896, per=32.70%, avg=39705.60, stdev=7140.98, samples=10 00:28:43.234 iops : min= 212, max= 382, avg=310.20, stdev=55.79, samples=10 00:28:43.234 lat (msec) : 4=2.38%, 10=82.94%, 20=10.88%, 50=3.16%, 100=0.64% 00:28:43.234 cpu : usr=95.98%, sys=3.70%, ctx=16, majf=0, minf=67 00:28:43.234 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:43.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.234 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.234 issued rwts: total=1553,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.234 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:43.234 filename0: (groupid=0, jobs=1): err= 0: pid=3563702: Tue Jul 16 01:34:08 2024 00:28:43.234 read: IOPS=304, BW=38.1MiB/s (39.9MB/s)(191MiB/5004msec) 00:28:43.234 slat (nsec): min=6015, max=38091, avg=11011.12, stdev=2943.68 00:28:43.234 clat (usec): min=3604, max=51203, avg=9827.81, stdev=8117.29 00:28:43.234 lat (usec): min=3611, max=51214, avg=9838.82, stdev=8117.35 00:28:43.234 clat percentiles (usec): 00:28:43.234 | 1.00th=[ 4146], 5.00th=[ 5342], 10.00th=[ 5866], 20.00th=[ 6652], 00:28:43.234 | 30.00th=[ 7504], 40.00th=[ 8029], 50.00th=[ 8455], 60.00th=[ 8848], 00:28:43.234 | 70.00th=[ 9241], 80.00th=[ 9765], 90.00th=[10814], 95.00th=[11600], 00:28:43.234 | 99.00th=[49021], 99.50th=[50070], 99.90th=[50594], 99.95th=[51119], 00:28:43.234 | 99.99th=[51119] 00:28:43.234 bw ( KiB/s): min=30464, max=48896, per=32.70%, avg=39708.44, stdev=6245.09, samples=9 00:28:43.234 iops : min= 238, max= 382, avg=310.22, stdev=48.79, samples=9 00:28:43.234 lat (msec) : 4=0.52%, 10=82.36%, 20=12.98%, 50=3.67%, 100=0.46% 00:28:43.234 cpu : usr=96.30%, sys=3.36%, ctx=11, majf=0, minf=134 00:28:43.234 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:43.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.234 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.234 issued rwts: total=1525,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.234 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:43.234 filename0: (groupid=0, jobs=1): err= 0: pid=3563703: Tue Jul 16 01:34:08 2024 00:28:43.234 read: IOPS=334, BW=41.8MiB/s (43.8MB/s)(209MiB/5002msec) 00:28:43.234 slat (nsec): min=6071, max=40055, avg=11613.78, stdev=3451.43 00:28:43.234 clat (usec): min=2985, max=51744, avg=8965.70, stdev=6355.07 00:28:43.234 lat (usec): min=2993, max=51755, avg=8977.31, stdev=6355.41 00:28:43.234 clat percentiles 
(usec): 00:28:43.235 | 1.00th=[ 3425], 5.00th=[ 3621], 10.00th=[ 4948], 20.00th=[ 5997], 00:28:43.235 | 30.00th=[ 6652], 40.00th=[ 7767], 50.00th=[ 8455], 60.00th=[ 8979], 00:28:43.235 | 70.00th=[ 9634], 80.00th=[10290], 90.00th=[11338], 95.00th=[11994], 00:28:43.235 | 99.00th=[49021], 99.50th=[50070], 99.90th=[51643], 99.95th=[51643], 00:28:43.235 | 99.99th=[51643] 00:28:43.235 bw ( KiB/s): min=32256, max=58368, per=34.95%, avg=42439.11, stdev=7101.15, samples=9 00:28:43.235 iops : min= 252, max= 456, avg=331.56, stdev=55.48, samples=9 00:28:43.235 lat (msec) : 4=8.08%, 10=68.28%, 20=21.48%, 50=1.68%, 100=0.48% 00:28:43.235 cpu : usr=92.42%, sys=5.62%, ctx=460, majf=0, minf=95 00:28:43.235 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:43.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.235 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.235 issued rwts: total=1671,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.235 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:43.235 00:28:43.235 Run status group 0 (all jobs): 00:28:43.235 READ: bw=119MiB/s (124MB/s), 38.1MiB/s-41.8MiB/s (39.9MB/s-43.8MB/s), io=594MiB (622MB), run=5002-5006msec 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.235 bdev_null0 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.235 [2024-07-16 01:34:08.372026] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.235 bdev_null1 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.235 bdev_null2 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:43.235 { 00:28:43.235 "params": { 00:28:43.235 "name": "Nvme$subsystem", 00:28:43.235 "trtype": "$TEST_TRANSPORT", 00:28:43.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:43.235 "adrfam": "ipv4", 00:28:43.235 "trsvcid": "$NVMF_PORT", 00:28:43.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:43.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:43.235 "hdgst": ${hdgst:-false}, 00:28:43.235 "ddgst": ${ddgst:-false} 00:28:43.235 }, 00:28:43.235 "method": "bdev_nvme_attach_controller" 00:28:43.235 } 00:28:43.235 EOF 00:28:43.235 )") 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:43.235 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:43.236 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:43.236 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:43.236 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:43.236 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:43.236 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:43.236 01:34:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:43.236 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:43.236 01:34:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:43.236 { 00:28:43.236 "params": { 00:28:43.236 "name": "Nvme$subsystem", 00:28:43.236 "trtype": "$TEST_TRANSPORT", 00:28:43.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:43.236 "adrfam": "ipv4", 00:28:43.236 "trsvcid": "$NVMF_PORT", 00:28:43.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:43.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:43.236 "hdgst": ${hdgst:-false}, 00:28:43.236 "ddgst": ${ddgst:-false} 00:28:43.236 }, 00:28:43.236 "method": 
"bdev_nvme_attach_controller" 00:28:43.236 } 00:28:43.236 EOF 00:28:43.236 )") 00:28:43.236 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:43.236 01:34:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:43.236 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:43.236 01:34:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:43.236 01:34:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:43.236 01:34:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:43.236 { 00:28:43.236 "params": { 00:28:43.236 "name": "Nvme$subsystem", 00:28:43.236 "trtype": "$TEST_TRANSPORT", 00:28:43.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:43.236 "adrfam": "ipv4", 00:28:43.236 "trsvcid": "$NVMF_PORT", 00:28:43.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:43.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:43.236 "hdgst": ${hdgst:-false}, 00:28:43.236 "ddgst": ${ddgst:-false} 00:28:43.236 }, 00:28:43.236 "method": "bdev_nvme_attach_controller" 00:28:43.236 } 00:28:43.236 EOF 00:28:43.236 )") 00:28:43.236 01:34:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:43.236 01:34:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:28:43.236 01:34:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:43.236 01:34:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:43.236 "params": { 00:28:43.236 "name": "Nvme0", 00:28:43.236 "trtype": "tcp", 00:28:43.236 "traddr": "10.0.0.2", 00:28:43.236 "adrfam": "ipv4", 00:28:43.236 "trsvcid": "4420", 00:28:43.236 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:43.236 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:43.236 "hdgst": false, 00:28:43.236 "ddgst": false 00:28:43.236 }, 00:28:43.236 "method": "bdev_nvme_attach_controller" 00:28:43.236 },{ 00:28:43.236 "params": { 00:28:43.236 "name": "Nvme1", 00:28:43.236 "trtype": "tcp", 00:28:43.236 "traddr": "10.0.0.2", 00:28:43.236 "adrfam": "ipv4", 00:28:43.236 "trsvcid": "4420", 00:28:43.236 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:43.236 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:43.236 "hdgst": false, 00:28:43.236 "ddgst": false 00:28:43.236 }, 00:28:43.236 "method": "bdev_nvme_attach_controller" 00:28:43.236 },{ 00:28:43.236 "params": { 00:28:43.236 "name": "Nvme2", 00:28:43.236 "trtype": "tcp", 00:28:43.236 "traddr": "10.0.0.2", 00:28:43.236 "adrfam": "ipv4", 00:28:43.236 "trsvcid": "4420", 00:28:43.236 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:43.236 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:43.236 "hdgst": false, 00:28:43.236 "ddgst": false 00:28:43.236 }, 00:28:43.236 "method": "bdev_nvme_attach_controller" 00:28:43.236 }' 00:28:43.236 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:43.236 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:43.236 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:43.236 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:43.236 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:43.236 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:43.236 
01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:43.236 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:43.236 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:43.236 01:34:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:43.236 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:43.236 ... 00:28:43.236 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:43.236 ... 00:28:43.236 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:43.236 ... 00:28:43.236 fio-3.35 00:28:43.236 Starting 24 threads 00:28:43.236 EAL: No free 2048 kB hugepages reported on node 1 00:28:55.461 00:28:55.461 filename0: (groupid=0, jobs=1): err= 0: pid=3564972: Tue Jul 16 01:34:19 2024 00:28:55.461 read: IOPS=539, BW=2158KiB/s (2210kB/s)(21.1MiB/10022msec) 00:28:55.461 slat (nsec): min=7566, max=73586, avg=22061.85, stdev=10641.76 00:28:55.461 clat (usec): min=7954, max=31143, avg=29473.24, stdev=2056.09 00:28:55.461 lat (usec): min=7964, max=31157, avg=29495.31, stdev=2056.20 00:28:55.461 clat percentiles (usec): 00:28:55.461 | 1.00th=[19530], 5.00th=[29230], 10.00th=[29230], 20.00th=[29492], 00:28:55.461 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29754], 60.00th=[29754], 00:28:55.461 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:28:55.461 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31065], 99.95th=[31065], 00:28:55.461 | 99.99th=[31065] 00:28:55.461 bw ( KiB/s): min= 2048, max= 2432, per=4.23%, avg=2156.55, stdev=85.81, samples=20 00:28:55.461 iops : min= 512, max= 608, avg=539.10, stdev=21.45, samples=20 00:28:55.461 lat (msec) : 10=0.59%, 20=0.59%, 50=98.82% 00:28:55.461 cpu : usr=98.72%, sys=0.87%, ctx=10, majf=0, minf=52 00:28:55.461 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:55.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.461 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.461 issued rwts: total=5408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.461 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:55.461 filename0: (groupid=0, jobs=1): err= 0: pid=3564973: Tue Jul 16 01:34:19 2024 00:28:55.461 read: IOPS=529, BW=2119KiB/s (2170kB/s)(20.9MiB/10089msec) 00:28:55.461 slat (nsec): min=8596, max=74858, avg=32845.30, stdev=14506.41 00:28:55.461 clat (msec): min=28, max=113, avg=29.93, stdev= 4.72 00:28:55.461 lat (msec): min=28, max=113, avg=29.96, stdev= 4.72 00:28:55.461 clat percentiles (msec): 00:28:55.461 | 1.00th=[ 29], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:28:55.461 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 30], 00:28:55.461 | 70.00th=[ 30], 80.00th=[ 30], 90.00th=[ 31], 95.00th=[ 31], 00:28:55.461 | 99.00th=[ 31], 99.50th=[ 53], 99.90th=[ 113], 99.95th=[ 113], 00:28:55.461 | 99.99th=[ 113] 00:28:55.461 bw ( KiB/s): min= 1920, max= 2176, per=4.18%, avg=2130.70, stdev=74.86, samples=20 00:28:55.461 iops : min= 480, max= 544, avg=532.60, stdev=18.67, samples=20 00:28:55.461 lat (msec) : 50=99.40%, 100=0.30%, 250=0.30% 
00:28:55.461 cpu : usr=98.69%, sys=0.89%, ctx=15, majf=0, minf=49 00:28:55.461 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:55.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.461 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.461 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.461 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:55.461 filename0: (groupid=0, jobs=1): err= 0: pid=3564974: Tue Jul 16 01:34:19 2024 00:28:55.461 read: IOPS=530, BW=2122KiB/s (2173kB/s)(21.0MiB/10113msec) 00:28:55.461 slat (nsec): min=7478, max=71467, avg=20789.70, stdev=8740.93 00:28:55.461 clat (msec): min=9, max=114, avg=29.96, stdev= 4.50 00:28:55.461 lat (msec): min=9, max=114, avg=29.98, stdev= 4.50 00:28:55.461 clat percentiles (msec): 00:28:55.461 | 1.00th=[ 28], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:28:55.461 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 30], 00:28:55.461 | 70.00th=[ 30], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 31], 00:28:55.461 | 99.00th=[ 33], 99.50th=[ 47], 99.90th=[ 115], 99.95th=[ 115], 00:28:55.461 | 99.99th=[ 115] 00:28:55.461 bw ( KiB/s): min= 2048, max= 2180, per=4.20%, avg=2141.80, stdev=57.77, samples=20 00:28:55.461 iops : min= 512, max= 545, avg=535.45, stdev=14.44, samples=20 00:28:55.461 lat (msec) : 10=0.11%, 50=99.61%, 100=0.02%, 250=0.26% 00:28:55.461 cpu : usr=98.85%, sys=0.75%, ctx=13, majf=0, minf=46 00:28:55.461 IO depths : 1=5.9%, 2=12.0%, 4=24.6%, 8=50.9%, 16=6.6%, 32=0.0%, >=64=0.0% 00:28:55.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.461 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.461 issued rwts: total=5364,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.461 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:55.461 filename0: (groupid=0, jobs=1): err= 0: pid=3564975: Tue Jul 16 01:34:19 2024 00:28:55.461 read: IOPS=535, BW=2143KiB/s (2195kB/s)(21.1MiB/10075msec) 00:28:55.461 slat (nsec): min=7267, max=58613, avg=17430.33, stdev=9253.88 00:28:55.461 clat (usec): min=10070, max=89273, avg=29718.79, stdev=3759.53 00:28:55.461 lat (usec): min=10081, max=89287, avg=29736.22, stdev=3759.72 00:28:55.461 clat percentiles (usec): 00:28:55.461 | 1.00th=[21103], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:28:55.461 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:28:55.461 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:28:55.461 | 99.00th=[31851], 99.50th=[47449], 99.90th=[89654], 99.95th=[89654], 00:28:55.461 | 99.99th=[89654] 00:28:55.461 bw ( KiB/s): min= 2048, max= 2608, per=4.23%, avg=2154.60, stdev=123.16, samples=20 00:28:55.461 iops : min= 512, max= 652, avg=538.65, stdev=30.79, samples=20 00:28:55.461 lat (msec) : 20=0.89%, 50=98.81%, 100=0.30% 00:28:55.461 cpu : usr=98.70%, sys=0.89%, ctx=16, majf=0, minf=53 00:28:55.461 IO depths : 1=5.9%, 2=11.9%, 4=24.4%, 8=51.2%, 16=6.7%, 32=0.0%, >=64=0.0% 00:28:55.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.461 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.461 issued rwts: total=5398,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.461 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:55.461 filename0: (groupid=0, jobs=1): err= 0: pid=3564976: Tue Jul 16 01:34:19 2024 00:28:55.461 read: IOPS=534, BW=2137KiB/s 
(2188kB/s)(20.9MiB/10002msec) 00:28:55.461 slat (nsec): min=8491, max=73101, avg=27586.40, stdev=12322.22 00:28:55.461 clat (usec): min=14727, max=64906, avg=29724.03, stdev=1317.94 00:28:55.461 lat (usec): min=14736, max=64934, avg=29751.62, stdev=1316.87 00:28:55.461 clat percentiles (usec): 00:28:55.461 | 1.00th=[28967], 5.00th=[29230], 10.00th=[29230], 20.00th=[29492], 00:28:55.461 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29754], 60.00th=[29754], 00:28:55.461 | 70.00th=[29754], 80.00th=[29754], 90.00th=[30016], 95.00th=[30278], 00:28:55.461 | 99.00th=[30802], 99.50th=[31065], 99.90th=[50070], 99.95th=[50070], 00:28:55.461 | 99.99th=[64750] 00:28:55.461 bw ( KiB/s): min= 2048, max= 2180, per=4.19%, avg=2137.26, stdev=60.94, samples=19 00:28:55.461 iops : min= 512, max= 545, avg=534.32, stdev=15.24, samples=19 00:28:55.461 lat (msec) : 20=0.04%, 50=99.70%, 100=0.26% 00:28:55.461 cpu : usr=98.64%, sys=0.96%, ctx=14, majf=0, minf=33 00:28:55.461 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:55.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.461 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.461 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.461 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:55.461 filename0: (groupid=0, jobs=1): err= 0: pid=3564977: Tue Jul 16 01:34:19 2024 00:28:55.461 read: IOPS=539, BW=2158KiB/s (2209kB/s)(21.1MiB/10026msec) 00:28:55.461 slat (nsec): min=4717, max=70335, avg=21488.31, stdev=10060.98 00:28:55.461 clat (usec): min=5841, max=44154, avg=29489.91, stdev=2031.97 00:28:55.461 lat (usec): min=5849, max=44179, avg=29511.40, stdev=2032.70 00:28:55.461 clat percentiles (usec): 00:28:55.461 | 1.00th=[19792], 5.00th=[29230], 10.00th=[29230], 20.00th=[29492], 00:28:55.461 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29754], 60.00th=[29754], 00:28:55.461 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:28:55.461 | 99.00th=[31065], 99.50th=[31065], 99.90th=[31327], 99.95th=[31327], 00:28:55.461 | 99.99th=[44303] 00:28:55.461 bw ( KiB/s): min= 2048, max= 2432, per=4.23%, avg=2156.30, stdev=85.76, samples=20 00:28:55.461 iops : min= 512, max= 608, avg=539.00, stdev=21.43, samples=20 00:28:55.461 lat (msec) : 10=0.43%, 20=0.80%, 50=98.78% 00:28:55.461 cpu : usr=98.67%, sys=0.93%, ctx=12, majf=0, minf=34 00:28:55.461 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:28:55.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.461 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.461 issued rwts: total=5408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.462 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:55.462 filename0: (groupid=0, jobs=1): err= 0: pid=3564978: Tue Jul 16 01:34:19 2024 00:28:55.462 read: IOPS=530, BW=2120KiB/s (2171kB/s)(20.9MiB/10113msec) 00:28:55.462 slat (nsec): min=8437, max=68414, avg=24977.37, stdev=7959.33 00:28:55.462 clat (msec): min=25, max=114, avg=29.99, stdev= 4.76 00:28:55.462 lat (msec): min=25, max=114, avg=30.01, stdev= 4.76 00:28:55.462 clat percentiles (msec): 00:28:55.462 | 1.00th=[ 29], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:28:55.462 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 30], 00:28:55.462 | 70.00th=[ 30], 80.00th=[ 30], 90.00th=[ 31], 95.00th=[ 31], 00:28:55.462 | 99.00th=[ 32], 99.50th=[ 48], 99.90th=[ 115], 99.95th=[ 115], 
00:28:55.462 | 99.99th=[ 115] 00:28:55.462 bw ( KiB/s): min= 2048, max= 2180, per=4.20%, avg=2139.40, stdev=60.53, samples=20 00:28:55.462 iops : min= 512, max= 545, avg=534.85, stdev=15.13, samples=20 00:28:55.462 lat (msec) : 50=99.66%, 100=0.04%, 250=0.30% 00:28:55.462 cpu : usr=98.79%, sys=0.80%, ctx=19, majf=0, minf=43 00:28:55.462 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:55.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.462 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.462 issued rwts: total=5360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.462 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:55.462 filename0: (groupid=0, jobs=1): err= 0: pid=3564979: Tue Jul 16 01:34:19 2024 00:28:55.462 read: IOPS=529, BW=2119KiB/s (2169kB/s)(20.9MiB/10090msec) 00:28:55.462 slat (nsec): min=4167, max=76059, avg=30737.84, stdev=11193.87 00:28:55.462 clat (msec): min=26, max=114, avg=29.94, stdev= 4.82 00:28:55.462 lat (msec): min=26, max=114, avg=29.97, stdev= 4.82 00:28:55.462 clat percentiles (msec): 00:28:55.462 | 1.00th=[ 29], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:28:55.462 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 30], 00:28:55.462 | 70.00th=[ 30], 80.00th=[ 30], 90.00th=[ 31], 95.00th=[ 31], 00:28:55.462 | 99.00th=[ 32], 99.50th=[ 53], 99.90th=[ 115], 99.95th=[ 115], 00:28:55.462 | 99.99th=[ 115] 00:28:55.462 bw ( KiB/s): min= 1920, max= 2176, per=4.18%, avg=2130.50, stdev=75.55, samples=20 00:28:55.462 iops : min= 480, max= 544, avg=532.55, stdev=18.92, samples=20 00:28:55.462 lat (msec) : 50=99.40%, 100=0.30%, 250=0.30% 00:28:55.462 cpu : usr=98.70%, sys=0.89%, ctx=15, majf=0, minf=42 00:28:55.462 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:55.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.462 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.462 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.462 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:55.462 filename1: (groupid=0, jobs=1): err= 0: pid=3564980: Tue Jul 16 01:34:19 2024 00:28:55.462 read: IOPS=529, BW=2119KiB/s (2169kB/s)(20.9MiB/10090msec) 00:28:55.462 slat (nsec): min=4528, max=75931, avg=29568.30, stdev=12332.99 00:28:55.462 clat (msec): min=28, max=115, avg=29.93, stdev= 4.82 00:28:55.462 lat (msec): min=28, max=115, avg=29.96, stdev= 4.82 00:28:55.462 clat percentiles (msec): 00:28:55.462 | 1.00th=[ 29], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:28:55.462 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 30], 00:28:55.462 | 70.00th=[ 30], 80.00th=[ 30], 90.00th=[ 31], 95.00th=[ 31], 00:28:55.462 | 99.00th=[ 32], 99.50th=[ 52], 99.90th=[ 115], 99.95th=[ 115], 00:28:55.462 | 99.99th=[ 116] 00:28:55.462 bw ( KiB/s): min= 1923, max= 2176, per=4.18%, avg=2130.65, stdev=75.12, samples=20 00:28:55.462 iops : min= 480, max= 544, avg=532.55, stdev=18.92, samples=20 00:28:55.462 lat (msec) : 50=99.40%, 100=0.30%, 250=0.30% 00:28:55.462 cpu : usr=98.75%, sys=0.84%, ctx=17, majf=0, minf=37 00:28:55.462 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:55.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.462 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.462 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.462 latency 
: target=0, window=0, percentile=100.00%, depth=16 00:28:55.462 filename1: (groupid=0, jobs=1): err= 0: pid=3564981: Tue Jul 16 01:34:19 2024 00:28:55.462 read: IOPS=529, BW=2119KiB/s (2169kB/s)(20.9MiB/10101msec) 00:28:55.462 slat (nsec): min=6261, max=71958, avg=25522.96, stdev=17227.53 00:28:55.462 clat (msec): min=15, max=113, avg=29.99, stdev= 4.78 00:28:55.462 lat (msec): min=15, max=113, avg=30.01, stdev= 4.78 00:28:55.462 clat percentiles (msec): 00:28:55.462 | 1.00th=[ 29], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:28:55.462 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 30], 00:28:55.462 | 70.00th=[ 30], 80.00th=[ 30], 90.00th=[ 31], 95.00th=[ 31], 00:28:55.462 | 99.00th=[ 32], 99.50th=[ 51], 99.90th=[ 113], 99.95th=[ 113], 00:28:55.462 | 99.99th=[ 114] 00:28:55.462 bw ( KiB/s): min= 1920, max= 2176, per=4.18%, avg=2131.55, stdev=76.13, samples=20 00:28:55.462 iops : min= 480, max= 544, avg=532.85, stdev=19.10, samples=20 00:28:55.462 lat (msec) : 20=0.28%, 50=99.12%, 100=0.30%, 250=0.30% 00:28:55.462 cpu : usr=98.73%, sys=0.87%, ctx=12, majf=0, minf=54 00:28:55.462 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:28:55.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.462 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.462 issued rwts: total=5350,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.462 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:55.462 filename1: (groupid=0, jobs=1): err= 0: pid=3564982: Tue Jul 16 01:34:19 2024 00:28:55.462 read: IOPS=529, BW=2119KiB/s (2169kB/s)(20.9MiB/10090msec) 00:28:55.462 slat (nsec): min=4435, max=78733, avg=30803.09, stdev=11734.04 00:28:55.462 clat (msec): min=28, max=115, avg=29.94, stdev= 4.83 00:28:55.462 lat (msec): min=28, max=115, avg=29.97, stdev= 4.83 00:28:55.462 clat percentiles (msec): 00:28:55.462 | 1.00th=[ 29], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:28:55.462 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 30], 00:28:55.462 | 70.00th=[ 30], 80.00th=[ 30], 90.00th=[ 31], 95.00th=[ 31], 00:28:55.462 | 99.00th=[ 32], 99.50th=[ 53], 99.90th=[ 115], 99.95th=[ 115], 00:28:55.462 | 99.99th=[ 115] 00:28:55.462 bw ( KiB/s): min= 1920, max= 2176, per=4.18%, avg=2130.50, stdev=75.55, samples=20 00:28:55.462 iops : min= 480, max= 544, avg=532.55, stdev=18.92, samples=20 00:28:55.462 lat (msec) : 50=99.40%, 100=0.30%, 250=0.30% 00:28:55.462 cpu : usr=98.89%, sys=0.72%, ctx=10, majf=0, minf=40 00:28:55.462 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:55.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.462 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.462 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.462 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:55.462 filename1: (groupid=0, jobs=1): err= 0: pid=3564983: Tue Jul 16 01:34:19 2024 00:28:55.462 read: IOPS=538, BW=2152KiB/s (2204kB/s)(21.1MiB/10020msec) 00:28:55.462 slat (nsec): min=8036, max=71183, avg=27802.18, stdev=12438.22 00:28:55.462 clat (usec): min=11924, max=31226, avg=29508.58, stdev=1489.09 00:28:55.462 lat (usec): min=11932, max=31240, avg=29536.39, stdev=1489.50 00:28:55.462 clat percentiles (usec): 00:28:55.462 | 1.00th=[21627], 5.00th=[29230], 10.00th=[29230], 20.00th=[29492], 00:28:55.462 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29754], 60.00th=[29754], 00:28:55.462 | 
70.00th=[29754], 80.00th=[29754], 90.00th=[30016], 95.00th=[30278], 00:28:55.462 | 99.00th=[30802], 99.50th=[30802], 99.90th=[31065], 99.95th=[31065], 00:28:55.462 | 99.99th=[31327] 00:28:55.462 bw ( KiB/s): min= 2043, max= 2304, per=4.22%, avg=2150.15, stdev=67.37, samples=20 00:28:55.462 iops : min= 510, max= 576, avg=537.50, stdev=16.91, samples=20 00:28:55.462 lat (msec) : 20=0.89%, 50=99.11% 00:28:55.462 cpu : usr=98.82%, sys=0.77%, ctx=13, majf=0, minf=41 00:28:55.462 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:55.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.462 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.462 issued rwts: total=5392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.462 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:55.462 filename1: (groupid=0, jobs=1): err= 0: pid=3564984: Tue Jul 16 01:34:19 2024 00:28:55.462 read: IOPS=530, BW=2120KiB/s (2171kB/s)(20.9MiB/10113msec) 00:28:55.462 slat (nsec): min=8452, max=73358, avg=27197.48, stdev=9421.96 00:28:55.462 clat (msec): min=26, max=114, avg=29.97, stdev= 4.75 00:28:55.462 lat (msec): min=26, max=114, avg=30.00, stdev= 4.75 00:28:55.462 clat percentiles (msec): 00:28:55.462 | 1.00th=[ 29], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:28:55.462 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 30], 00:28:55.462 | 70.00th=[ 30], 80.00th=[ 30], 90.00th=[ 31], 95.00th=[ 31], 00:28:55.462 | 99.00th=[ 32], 99.50th=[ 48], 99.90th=[ 115], 99.95th=[ 115], 00:28:55.462 | 99.99th=[ 115] 00:28:55.462 bw ( KiB/s): min= 2048, max= 2180, per=4.20%, avg=2139.40, stdev=60.53, samples=20 00:28:55.462 iops : min= 512, max= 545, avg=534.85, stdev=15.13, samples=20 00:28:55.462 lat (msec) : 50=99.70%, 250=0.30% 00:28:55.462 cpu : usr=98.49%, sys=1.11%, ctx=16, majf=0, minf=37 00:28:55.462 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:55.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.462 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.462 issued rwts: total=5360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.462 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:55.462 filename1: (groupid=0, jobs=1): err= 0: pid=3564985: Tue Jul 16 01:34:19 2024 00:28:55.462 read: IOPS=532, BW=2128KiB/s (2179kB/s)(20.9MiB/10074msec) 00:28:55.462 slat (nsec): min=7307, max=58607, avg=12676.27, stdev=6449.00 00:28:55.462 clat (usec): min=16772, max=89006, avg=29962.68, stdev=3450.03 00:28:55.462 lat (usec): min=16782, max=89023, avg=29975.35, stdev=3450.12 00:28:55.462 clat percentiles (usec): 00:28:55.462 | 1.00th=[28967], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:28:55.462 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:28:55.462 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:28:55.462 | 99.00th=[31327], 99.50th=[46924], 99.90th=[88605], 99.95th=[88605], 00:28:55.462 | 99.99th=[88605] 00:28:55.462 bw ( KiB/s): min= 2048, max= 2180, per=4.20%, avg=2139.40, stdev=60.53, samples=20 00:28:55.462 iops : min= 512, max= 545, avg=534.85, stdev=15.13, samples=20 00:28:55.462 lat (msec) : 20=0.30%, 50=99.40%, 100=0.30% 00:28:55.462 cpu : usr=98.89%, sys=0.71%, ctx=15, majf=0, minf=47 00:28:55.462 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:55.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:28:55.462 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.462 issued rwts: total=5360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.462 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:55.462 filename1: (groupid=0, jobs=1): err= 0: pid=3564986: Tue Jul 16 01:34:19 2024 00:28:55.462 read: IOPS=529, BW=2119KiB/s (2169kB/s)(20.9MiB/10090msec) 00:28:55.462 slat (nsec): min=4101, max=76600, avg=30629.48, stdev=12154.09 00:28:55.463 clat (msec): min=28, max=115, avg=29.93, stdev= 4.82 00:28:55.463 lat (msec): min=28, max=115, avg=29.96, stdev= 4.82 00:28:55.463 clat percentiles (msec): 00:28:55.463 | 1.00th=[ 29], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:28:55.463 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 30], 00:28:55.463 | 70.00th=[ 30], 80.00th=[ 30], 90.00th=[ 31], 95.00th=[ 31], 00:28:55.463 | 99.00th=[ 32], 99.50th=[ 53], 99.90th=[ 115], 99.95th=[ 115], 00:28:55.463 | 99.99th=[ 115] 00:28:55.463 bw ( KiB/s): min= 1920, max= 2176, per=4.18%, avg=2130.50, stdev=75.55, samples=20 00:28:55.463 iops : min= 480, max= 544, avg=532.55, stdev=18.92, samples=20 00:28:55.463 lat (msec) : 50=99.40%, 100=0.30%, 250=0.30% 00:28:55.463 cpu : usr=98.74%, sys=0.87%, ctx=16, majf=0, minf=39 00:28:55.463 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:55.463 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.463 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.463 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.463 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:55.463 filename1: (groupid=0, jobs=1): err= 0: pid=3564987: Tue Jul 16 01:34:19 2024 00:28:55.463 read: IOPS=530, BW=2120KiB/s (2171kB/s)(20.9MiB/10113msec) 00:28:55.463 slat (nsec): min=8169, max=82315, avg=29219.91, stdev=10068.75 00:28:55.463 clat (msec): min=27, max=114, avg=29.95, stdev= 4.76 00:28:55.463 lat (msec): min=27, max=115, avg=29.98, stdev= 4.75 00:28:55.463 clat percentiles (msec): 00:28:55.463 | 1.00th=[ 29], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:28:55.463 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 30], 00:28:55.463 | 70.00th=[ 30], 80.00th=[ 30], 90.00th=[ 31], 95.00th=[ 31], 00:28:55.463 | 99.00th=[ 32], 99.50th=[ 48], 99.90th=[ 115], 99.95th=[ 115], 00:28:55.463 | 99.99th=[ 115] 00:28:55.463 bw ( KiB/s): min= 2048, max= 2180, per=4.20%, avg=2139.40, stdev=60.53, samples=20 00:28:55.463 iops : min= 512, max= 545, avg=534.85, stdev=15.13, samples=20 00:28:55.463 lat (msec) : 50=99.70%, 250=0.30% 00:28:55.463 cpu : usr=98.96%, sys=0.63%, ctx=15, majf=0, minf=40 00:28:55.463 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:55.463 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.463 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.463 issued rwts: total=5360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.463 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:55.463 filename2: (groupid=0, jobs=1): err= 0: pid=3564988: Tue Jul 16 01:34:19 2024 00:28:55.463 read: IOPS=539, BW=2158KiB/s (2209kB/s)(21.1MiB/10026msec) 00:28:55.463 slat (nsec): min=4965, max=69431, avg=22695.48, stdev=9974.82 00:28:55.463 clat (usec): min=8088, max=31226, avg=29474.66, stdev=1920.33 00:28:55.463 lat (usec): min=8105, max=31240, avg=29497.36, stdev=1920.87 00:28:55.463 clat percentiles (usec): 00:28:55.463 | 1.00th=[19530], 
5.00th=[29230], 10.00th=[29230], 20.00th=[29492], 00:28:55.463 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29754], 60.00th=[29754], 00:28:55.463 | 70.00th=[29754], 80.00th=[29754], 90.00th=[30016], 95.00th=[30278], 00:28:55.463 | 99.00th=[30802], 99.50th=[30802], 99.90th=[31065], 99.95th=[31065], 00:28:55.463 | 99.99th=[31327] 00:28:55.463 bw ( KiB/s): min= 2048, max= 2432, per=4.23%, avg=2156.30, stdev=85.76, samples=20 00:28:55.463 iops : min= 512, max= 608, avg=539.00, stdev=21.43, samples=20 00:28:55.463 lat (msec) : 10=0.30%, 20=0.89%, 50=98.82% 00:28:55.463 cpu : usr=98.88%, sys=0.67%, ctx=23, majf=0, minf=49 00:28:55.463 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:55.463 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.463 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.463 issued rwts: total=5408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.463 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:55.463 filename2: (groupid=0, jobs=1): err= 0: pid=3564989: Tue Jul 16 01:34:19 2024 00:28:55.463 read: IOPS=529, BW=2120KiB/s (2171kB/s)(20.9MiB/10095msec) 00:28:55.463 slat (nsec): min=7269, max=78203, avg=29115.34, stdev=11153.20 00:28:55.463 clat (msec): min=18, max=115, avg=29.94, stdev= 4.94 00:28:55.463 lat (msec): min=18, max=115, avg=29.97, stdev= 4.94 00:28:55.463 clat percentiles (msec): 00:28:55.463 | 1.00th=[ 28], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:28:55.463 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 30], 00:28:55.463 | 70.00th=[ 30], 80.00th=[ 30], 90.00th=[ 31], 95.00th=[ 31], 00:28:55.463 | 99.00th=[ 41], 99.50th=[ 47], 99.90th=[ 115], 99.95th=[ 115], 00:28:55.463 | 99.99th=[ 115] 00:28:55.463 bw ( KiB/s): min= 2016, max= 2176, per=4.18%, avg=2131.85, stdev=64.04, samples=20 00:28:55.463 iops : min= 504, max= 544, avg=532.85, stdev=16.03, samples=20 00:28:55.463 lat (msec) : 20=0.36%, 50=99.31%, 100=0.04%, 250=0.30% 00:28:55.463 cpu : usr=98.73%, sys=0.87%, ctx=15, majf=0, minf=43 00:28:55.463 IO depths : 1=5.9%, 2=12.1%, 4=24.7%, 8=50.7%, 16=6.6%, 32=0.0%, >=64=0.0% 00:28:55.463 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.463 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.463 issued rwts: total=5350,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.463 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:55.463 filename2: (groupid=0, jobs=1): err= 0: pid=3564990: Tue Jul 16 01:34:19 2024 00:28:55.463 read: IOPS=530, BW=2120KiB/s (2171kB/s)(20.9MiB/10113msec) 00:28:55.463 slat (nsec): min=7800, max=77101, avg=25718.99, stdev=9796.37 00:28:55.463 clat (msec): min=25, max=114, avg=29.98, stdev= 4.74 00:28:55.463 lat (msec): min=25, max=114, avg=30.01, stdev= 4.74 00:28:55.463 clat percentiles (msec): 00:28:55.463 | 1.00th=[ 29], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:28:55.463 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 30], 00:28:55.463 | 70.00th=[ 30], 80.00th=[ 30], 90.00th=[ 31], 95.00th=[ 31], 00:28:55.463 | 99.00th=[ 32], 99.50th=[ 48], 99.90th=[ 115], 99.95th=[ 115], 00:28:55.463 | 99.99th=[ 115] 00:28:55.463 bw ( KiB/s): min= 2048, max= 2180, per=4.20%, avg=2139.40, stdev=60.53, samples=20 00:28:55.463 iops : min= 512, max= 545, avg=534.85, stdev=15.13, samples=20 00:28:55.463 lat (msec) : 50=99.68%, 100=0.02%, 250=0.30% 00:28:55.463 cpu : usr=98.81%, sys=0.78%, ctx=16, majf=0, minf=38 00:28:55.463 IO depths : 1=6.2%, 2=12.5%, 
4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:55.463 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.463 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.463 issued rwts: total=5360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.463 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:55.463 filename2: (groupid=0, jobs=1): err= 0: pid=3564991: Tue Jul 16 01:34:19 2024 00:28:55.463 read: IOPS=542, BW=2170KiB/s (2223kB/s)(21.4MiB/10088msec) 00:28:55.463 slat (nsec): min=7314, max=65352, avg=14540.62, stdev=8980.25 00:28:55.463 clat (msec): min=10, max=117, avg=29.38, stdev= 6.29 00:28:55.463 lat (msec): min=10, max=117, avg=29.40, stdev= 6.29 00:28:55.463 clat percentiles (msec): 00:28:55.463 | 1.00th=[ 17], 5.00th=[ 22], 10.00th=[ 23], 20.00th=[ 29], 00:28:55.463 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 30], 00:28:55.463 | 70.00th=[ 30], 80.00th=[ 31], 90.00th=[ 36], 95.00th=[ 37], 00:28:55.463 | 99.00th=[ 43], 99.50th=[ 68], 99.90th=[ 115], 99.95th=[ 117], 00:28:55.463 | 99.99th=[ 117] 00:28:55.463 bw ( KiB/s): min= 1936, max= 2336, per=4.28%, avg=2182.70, stdev=90.44, samples=20 00:28:55.463 iops : min= 484, max= 584, avg=545.60, stdev=22.61, samples=20 00:28:55.463 lat (msec) : 20=1.95%, 50=97.31%, 100=0.55%, 250=0.18% 00:28:55.463 cpu : usr=98.78%, sys=0.81%, ctx=14, majf=0, minf=99 00:28:55.463 IO depths : 1=0.1%, 2=0.2%, 4=2.7%, 8=80.9%, 16=16.2%, 32=0.0%, >=64=0.0% 00:28:55.463 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.463 complete : 0=0.0%, 4=89.0%, 8=9.1%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.463 issued rwts: total=5474,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.463 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:55.463 filename2: (groupid=0, jobs=1): err= 0: pid=3564992: Tue Jul 16 01:34:19 2024 00:28:55.463 read: IOPS=539, BW=2159KiB/s (2211kB/s)(21.1MiB/10020msec) 00:28:55.463 slat (nsec): min=5034, max=76749, avg=31904.31, stdev=14264.01 00:28:55.463 clat (usec): min=5552, max=31148, avg=29381.28, stdev=2147.75 00:28:55.463 lat (usec): min=5561, max=31176, avg=29413.19, stdev=2149.24 00:28:55.463 clat percentiles (usec): 00:28:55.463 | 1.00th=[19792], 5.00th=[29230], 10.00th=[29230], 20.00th=[29230], 00:28:55.463 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29492], 60.00th=[29754], 00:28:55.463 | 70.00th=[29754], 80.00th=[29754], 90.00th=[30016], 95.00th=[30278], 00:28:55.463 | 99.00th=[30802], 99.50th=[30802], 99.90th=[31065], 99.95th=[31065], 00:28:55.463 | 99.99th=[31065] 00:28:55.463 bw ( KiB/s): min= 2043, max= 2432, per=4.23%, avg=2156.55, stdev=86.21, samples=20 00:28:55.463 iops : min= 510, max= 608, avg=539.10, stdev=21.60, samples=20 00:28:55.463 lat (msec) : 10=0.76%, 20=0.43%, 50=98.82% 00:28:55.463 cpu : usr=98.80%, sys=0.78%, ctx=20, majf=0, minf=45 00:28:55.463 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:55.463 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.463 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.463 issued rwts: total=5408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.463 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:55.463 filename2: (groupid=0, jobs=1): err= 0: pid=3564993: Tue Jul 16 01:34:19 2024 00:28:55.463 read: IOPS=529, BW=2119KiB/s (2169kB/s)(20.9MiB/10090msec) 00:28:55.463 slat (nsec): min=4497, max=88032, avg=33636.01, stdev=13911.33 00:28:55.463 clat 
(msec): min=28, max=112, avg=29.89, stdev= 4.70 00:28:55.463 lat (msec): min=28, max=112, avg=29.92, stdev= 4.70 00:28:55.463 clat percentiles (msec): 00:28:55.463 | 1.00th=[ 29], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:28:55.463 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 30], 00:28:55.463 | 70.00th=[ 30], 80.00th=[ 30], 90.00th=[ 31], 95.00th=[ 31], 00:28:55.463 | 99.00th=[ 32], 99.50th=[ 52], 99.90th=[ 113], 99.95th=[ 113], 00:28:55.463 | 99.99th=[ 113] 00:28:55.463 bw ( KiB/s): min= 1916, max= 2176, per=4.18%, avg=2131.00, stdev=75.75, samples=20 00:28:55.463 iops : min= 479, max= 544, avg=532.75, stdev=18.94, samples=20 00:28:55.463 lat (msec) : 50=99.40%, 100=0.30%, 250=0.30% 00:28:55.463 cpu : usr=98.56%, sys=0.85%, ctx=49, majf=0, minf=60 00:28:55.463 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:55.463 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.463 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.463 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.463 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:55.463 filename2: (groupid=0, jobs=1): err= 0: pid=3564994: Tue Jul 16 01:34:19 2024 00:28:55.463 read: IOPS=529, BW=2119KiB/s (2170kB/s)(20.9MiB/10087msec) 00:28:55.463 slat (nsec): min=10108, max=76294, avg=31207.10, stdev=12293.35 00:28:55.463 clat (msec): min=28, max=115, avg=29.92, stdev= 4.79 00:28:55.463 lat (msec): min=28, max=115, avg=29.95, stdev= 4.79 00:28:55.463 clat percentiles (msec): 00:28:55.463 | 1.00th=[ 29], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:28:55.463 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 30], 00:28:55.464 | 70.00th=[ 30], 80.00th=[ 30], 90.00th=[ 31], 95.00th=[ 31], 00:28:55.464 | 99.00th=[ 32], 99.50th=[ 50], 99.90th=[ 115], 99.95th=[ 115], 00:28:55.464 | 99.99th=[ 115] 00:28:55.464 bw ( KiB/s): min= 1923, max= 2176, per=4.18%, avg=2131.35, stdev=74.71, samples=20 00:28:55.464 iops : min= 480, max= 544, avg=532.80, stdev=18.79, samples=20 00:28:55.464 lat (msec) : 50=99.70%, 250=0.30% 00:28:55.464 cpu : usr=98.86%, sys=0.74%, ctx=12, majf=0, minf=55 00:28:55.464 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:55.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.464 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.464 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.464 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:55.464 filename2: (groupid=0, jobs=1): err= 0: pid=3564995: Tue Jul 16 01:34:19 2024 00:28:55.464 read: IOPS=529, BW=2119KiB/s (2170kB/s)(20.9MiB/10088msec) 00:28:55.464 slat (nsec): min=9151, max=71888, avg=31153.80, stdev=13930.53 00:28:55.464 clat (msec): min=28, max=112, avg=29.94, stdev= 4.72 00:28:55.464 lat (msec): min=28, max=112, avg=29.97, stdev= 4.72 00:28:55.464 clat percentiles (msec): 00:28:55.464 | 1.00th=[ 29], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:28:55.464 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 30], 00:28:55.464 | 70.00th=[ 30], 80.00th=[ 30], 90.00th=[ 31], 95.00th=[ 31], 00:28:55.464 | 99.00th=[ 32], 99.50th=[ 53], 99.90th=[ 113], 99.95th=[ 113], 00:28:55.464 | 99.99th=[ 113] 00:28:55.464 bw ( KiB/s): min= 1920, max= 2176, per=4.18%, avg=2130.70, stdev=74.86, samples=20 00:28:55.464 iops : min= 480, max= 544, avg=532.60, stdev=18.67, samples=20 00:28:55.464 lat (msec) : 
50=99.40%, 100=0.30%, 250=0.30% 00:28:55.464 cpu : usr=98.83%, sys=0.77%, ctx=10, majf=0, minf=54 00:28:55.464 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:55.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.464 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:55.464 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:55.464 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:55.464 00:28:55.464 Run status group 0 (all jobs): 00:28:55.464 READ: bw=49.8MiB/s (52.2MB/s), 2119KiB/s-2170KiB/s (2169kB/s-2223kB/s), io=503MiB (528MB), run=10002-10113msec 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:55.464 
01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:55.464 bdev_null0 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:55.464 [2024-07-16 01:34:20.224454] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:55.464 bdev_null1 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:55.464 01:34:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:55.464 { 00:28:55.464 "params": { 00:28:55.464 "name": "Nvme$subsystem", 00:28:55.464 "trtype": "$TEST_TRANSPORT", 00:28:55.464 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.464 "adrfam": "ipv4", 00:28:55.464 "trsvcid": "$NVMF_PORT", 00:28:55.464 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.464 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.464 "hdgst": ${hdgst:-false}, 00:28:55.465 "ddgst": ${ddgst:-false} 
00:28:55.465 }, 00:28:55.465 "method": "bdev_nvme_attach_controller" 00:28:55.465 } 00:28:55.465 EOF 00:28:55.465 )") 00:28:55.465 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:55.465 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:55.465 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:55.465 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:55.465 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:55.465 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:55.465 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:55.465 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:55.465 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:55.465 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:55.465 01:34:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:55.465 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:55.465 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:55.465 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:55.465 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:55.465 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:55.465 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:55.465 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:55.465 01:34:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:55.465 01:34:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:55.465 { 00:28:55.465 "params": { 00:28:55.465 "name": "Nvme$subsystem", 00:28:55.465 "trtype": "$TEST_TRANSPORT", 00:28:55.465 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.465 "adrfam": "ipv4", 00:28:55.465 "trsvcid": "$NVMF_PORT", 00:28:55.465 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.465 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.465 "hdgst": ${hdgst:-false}, 00:28:55.465 "ddgst": ${ddgst:-false} 00:28:55.465 }, 00:28:55.465 "method": "bdev_nvme_attach_controller" 00:28:55.465 } 00:28:55.465 EOF 00:28:55.465 )") 00:28:55.465 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:55.465 01:34:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:55.465 01:34:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:55.465 01:34:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
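
The trace above shows nvmf/common.sh building one bdev_nvme_attach_controller object per subsystem and piping the result through "jq ." before handing it to fio. A minimal stand-alone sketch of that assembly follows; the outer "subsystems"/"bdev" envelope and the function name are assumptions (the real gen_nvmf_target_json carries more options), only the per-subsystem object mirrors the trace.

# Simplified sketch of the JSON assembly traced above; an outline, not the real helper.
gen_target_json() {
  local config=() sub
  for sub in "${@:-0}"; do
    config+=("{
      \"method\": \"bdev_nvme_attach_controller\",
      \"params\": {
        \"name\": \"Nvme$sub\", \"trtype\": \"tcp\",
        \"traddr\": \"10.0.0.2\", \"adrfam\": \"ipv4\", \"trsvcid\": \"4420\",
        \"subnqn\": \"nqn.2016-06.io.spdk:cnode$sub\",
        \"hostnqn\": \"nqn.2016-06.io.spdk:host$sub\",
        \"hdgst\": ${hdgst:-false}, \"ddgst\": ${ddgst:-false}
      }
    }")
  done
  local IFS=,
  # join the objects with commas and validate/pretty-print, as the "jq ." step does
  jq . <<< "{ \"subsystems\": [ { \"subsystem\": \"bdev\", \"config\": [ ${config[*]} ] } ] }"
}
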
00:28:55.465 01:34:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:55.465 01:34:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:55.465 "params": { 00:28:55.465 "name": "Nvme0", 00:28:55.465 "trtype": "tcp", 00:28:55.465 "traddr": "10.0.0.2", 00:28:55.465 "adrfam": "ipv4", 00:28:55.465 "trsvcid": "4420", 00:28:55.465 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:55.465 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:55.465 "hdgst": false, 00:28:55.465 "ddgst": false 00:28:55.465 }, 00:28:55.465 "method": "bdev_nvme_attach_controller" 00:28:55.465 },{ 00:28:55.465 "params": { 00:28:55.465 "name": "Nvme1", 00:28:55.465 "trtype": "tcp", 00:28:55.465 "traddr": "10.0.0.2", 00:28:55.465 "adrfam": "ipv4", 00:28:55.465 "trsvcid": "4420", 00:28:55.465 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:55.465 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:55.465 "hdgst": false, 00:28:55.465 "ddgst": false 00:28:55.465 }, 00:28:55.465 "method": "bdev_nvme_attach_controller" 00:28:55.465 }' 00:28:55.465 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:55.465 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:55.465 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:55.465 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:55.465 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:55.465 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:55.465 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:55.465 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:55.465 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:55.465 01:34:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:55.465 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:55.465 ... 00:28:55.465 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:55.465 ... 
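
The job banner just printed (randread, bs 8k/16k/128k for R/W/T, ioengine=spdk_bdev, iodepth=8) together with the parameters set earlier in the trace (numjobs=2, runtime=5, files=1) pins down the approximate shape of the job file gen_fio_conf feeds to fio. A hypothetical reconstruction, with the bdev names as placeholders for whatever the harness resolves:

# Approximate job file behind the banner above; bdev names are assumptions.
cat > dif_rand.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=8k,16k,128k
numjobs=2
iodepth=8
time_based=1
runtime=5

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF
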
00:28:55.465 fio-3.35 00:28:55.465 Starting 4 threads 00:28:55.465 EAL: No free 2048 kB hugepages reported on node 1 00:29:00.735 00:29:00.735 filename0: (groupid=0, jobs=1): err= 0: pid=3566942: Tue Jul 16 01:34:26 2024 00:29:00.735 read: IOPS=2801, BW=21.9MiB/s (22.9MB/s)(109MiB/5002msec) 00:29:00.735 slat (nsec): min=4620, max=69889, avg=12541.94, stdev=9343.79 00:29:00.735 clat (usec): min=964, max=49222, avg=2817.13, stdev=1183.03 00:29:00.735 lat (usec): min=974, max=49235, avg=2829.67, stdev=1183.22 00:29:00.735 clat percentiles (usec): 00:29:00.735 | 1.00th=[ 1778], 5.00th=[ 2147], 10.00th=[ 2278], 20.00th=[ 2474], 00:29:00.735 | 30.00th=[ 2606], 40.00th=[ 2737], 50.00th=[ 2868], 60.00th=[ 2900], 00:29:00.735 | 70.00th=[ 2933], 80.00th=[ 2999], 90.00th=[ 3163], 95.00th=[ 3425], 00:29:00.735 | 99.00th=[ 4228], 99.50th=[ 4490], 99.90th=[ 4883], 99.95th=[49021], 00:29:00.735 | 99.99th=[49021] 00:29:00.735 bw ( KiB/s): min=21568, max=23280, per=26.07%, avg=22406.40, stdev=661.90, samples=10 00:29:00.735 iops : min= 2696, max= 2910, avg=2800.80, stdev=82.74, samples=10 00:29:00.735 lat (usec) : 1000=0.01% 00:29:00.735 lat (msec) : 2=2.15%, 4=96.10%, 10=1.68%, 50=0.06% 00:29:00.735 cpu : usr=97.02%, sys=2.66%, ctx=8, majf=0, minf=80 00:29:00.735 IO depths : 1=0.4%, 2=7.0%, 4=63.4%, 8=29.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:00.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:00.735 complete : 0=0.0%, 4=93.9%, 8=6.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:00.735 issued rwts: total=14012,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:00.735 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:00.735 filename0: (groupid=0, jobs=1): err= 0: pid=3566943: Tue Jul 16 01:34:26 2024 00:29:00.735 read: IOPS=2710, BW=21.2MiB/s (22.2MB/s)(106MiB/5002msec) 00:29:00.735 slat (nsec): min=5961, max=70336, avg=12361.06, stdev=6943.72 00:29:00.735 clat (usec): min=882, max=5194, avg=2914.86, stdev=411.49 00:29:00.735 lat (usec): min=905, max=5217, avg=2927.22, stdev=411.63 00:29:00.735 clat percentiles (usec): 00:29:00.735 | 1.00th=[ 2008], 5.00th=[ 2278], 10.00th=[ 2442], 20.00th=[ 2638], 00:29:00.735 | 30.00th=[ 2802], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2933], 00:29:00.735 | 70.00th=[ 2999], 80.00th=[ 3130], 90.00th=[ 3326], 95.00th=[ 3621], 00:29:00.735 | 99.00th=[ 4359], 99.50th=[ 4621], 99.90th=[ 5080], 99.95th=[ 5080], 00:29:00.735 | 99.99th=[ 5145] 00:29:00.735 bw ( KiB/s): min=20960, max=23504, per=25.22%, avg=21681.60, stdev=751.92, samples=10 00:29:00.735 iops : min= 2620, max= 2938, avg=2710.20, stdev=93.99, samples=10 00:29:00.735 lat (usec) : 1000=0.02% 00:29:00.735 lat (msec) : 2=0.95%, 4=96.64%, 10=2.38% 00:29:00.735 cpu : usr=96.84%, sys=2.80%, ctx=23, majf=0, minf=81 00:29:00.735 IO depths : 1=0.2%, 2=4.4%, 4=67.8%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:00.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:00.735 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:00.735 issued rwts: total=13559,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:00.735 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:00.735 filename1: (groupid=0, jobs=1): err= 0: pid=3566944: Tue Jul 16 01:34:26 2024 00:29:00.735 read: IOPS=2621, BW=20.5MiB/s (21.5MB/s)(102MiB/5002msec) 00:29:00.735 slat (nsec): min=5896, max=70046, avg=12185.53, stdev=9158.83 00:29:00.735 clat (usec): min=642, max=5419, avg=3014.60, stdev=411.60 00:29:00.735 lat (usec): min=654, max=5446, avg=3026.79, stdev=411.19 
00:29:00.735 clat percentiles (usec): 00:29:00.735 | 1.00th=[ 2057], 5.00th=[ 2442], 10.00th=[ 2671], 20.00th=[ 2802], 00:29:00.735 | 30.00th=[ 2900], 40.00th=[ 2933], 50.00th=[ 2933], 60.00th=[ 2966], 00:29:00.735 | 70.00th=[ 3064], 80.00th=[ 3195], 90.00th=[ 3458], 95.00th=[ 3752], 00:29:00.735 | 99.00th=[ 4621], 99.50th=[ 4817], 99.90th=[ 5080], 99.95th=[ 5145], 00:29:00.735 | 99.99th=[ 5342] 00:29:00.735 bw ( KiB/s): min=20304, max=21344, per=24.39%, avg=20970.50, stdev=295.53, samples=10 00:29:00.735 iops : min= 2538, max= 2668, avg=2621.30, stdev=36.94, samples=10 00:29:00.735 lat (usec) : 750=0.02%, 1000=0.03% 00:29:00.735 lat (msec) : 2=0.82%, 4=95.83%, 10=3.31% 00:29:00.735 cpu : usr=97.04%, sys=2.62%, ctx=8, majf=0, minf=67 00:29:00.735 IO depths : 1=0.1%, 2=3.7%, 4=69.1%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:00.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:00.735 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:00.735 issued rwts: total=13112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:00.735 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:00.735 filename1: (groupid=0, jobs=1): err= 0: pid=3566945: Tue Jul 16 01:34:26 2024 00:29:00.735 read: IOPS=2612, BW=20.4MiB/s (21.4MB/s)(102MiB/5001msec) 00:29:00.735 slat (nsec): min=5905, max=70070, avg=12901.94, stdev=9686.97 00:29:00.735 clat (usec): min=675, max=5405, avg=3025.38, stdev=416.57 00:29:00.735 lat (usec): min=687, max=5418, avg=3038.29, stdev=415.95 00:29:00.735 clat percentiles (usec): 00:29:00.735 | 1.00th=[ 2114], 5.00th=[ 2474], 10.00th=[ 2671], 20.00th=[ 2835], 00:29:00.735 | 30.00th=[ 2900], 40.00th=[ 2933], 50.00th=[ 2933], 60.00th=[ 2966], 00:29:00.735 | 70.00th=[ 3064], 80.00th=[ 3195], 90.00th=[ 3490], 95.00th=[ 3851], 00:29:00.735 | 99.00th=[ 4621], 99.50th=[ 4883], 99.90th=[ 5080], 99.95th=[ 5145], 00:29:00.735 | 99.99th=[ 5145] 00:29:00.735 bw ( KiB/s): min=19952, max=21328, per=24.31%, avg=20893.60, stdev=423.12, samples=10 00:29:00.735 iops : min= 2494, max= 2666, avg=2611.70, stdev=52.89, samples=10 00:29:00.735 lat (usec) : 750=0.01%, 1000=0.01% 00:29:00.735 lat (msec) : 2=0.55%, 4=95.37%, 10=4.06% 00:29:00.735 cpu : usr=96.86%, sys=2.82%, ctx=7, majf=0, minf=82 00:29:00.735 IO depths : 1=0.4%, 2=3.5%, 4=66.9%, 8=29.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:00.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:00.735 complete : 0=0.0%, 4=93.9%, 8=6.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:00.735 issued rwts: total=13064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:00.735 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:00.735 00:29:00.735 Run status group 0 (all jobs): 00:29:00.735 READ: bw=83.9MiB/s (88.0MB/s), 20.4MiB/s-21.9MiB/s (21.4MB/s-22.9MB/s), io=420MiB (440MB), run=5001-5002msec 00:29:00.735 01:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:29:00.735 01:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:00.735 01:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:00.735 01:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:00.735 01:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:00.735 01:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:00.735 01:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 
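
The destroy_subsystems loop being traced here reduces to two RPCs per subsystem; rpc_cmd is a thin wrapper around scripts/rpc.py, so the same teardown can be issued directly (loop bounds mirror "destroy_subsystems 0 1" above):

# Equivalent teardown via SPDK's rpc.py, matching the rpc_cmd calls in the trace.
for sub in 0 1; do
  ./scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${sub}"
  ./scripts/rpc.py bdev_null_delete "bdev_null${sub}"
done
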
00:29:00.735 01:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:00.735 01:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.735 01:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:00.735 01:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.735 01:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:00.735 01:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.735 01:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:00.735 01:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:00.735 01:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:29:00.735 01:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:00.735 01:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.735 01:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:00.735 01:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.735 01:34:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:00.735 01:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.735 01:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:00.735 01:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.735 00:29:00.735 real 0m24.440s 00:29:00.735 user 4m54.185s 00:29:00.735 sys 0m4.191s 00:29:00.735 01:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:00.735 01:34:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:00.735 ************************************ 00:29:00.735 END TEST fio_dif_rand_params 00:29:00.735 ************************************ 00:29:00.735 01:34:26 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:29:00.735 01:34:26 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:29:00.735 01:34:26 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:00.735 01:34:26 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:00.735 01:34:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:00.735 ************************************ 00:29:00.735 START TEST fio_dif_digest 00:29:00.735 ************************************ 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:29:00.736 
01:34:26 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:00.736 bdev_null0 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:00.736 [2024-07-16 01:34:26.619834] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:29:00.736 01:34:26 
nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:00.736 { 00:29:00.736 "params": { 00:29:00.736 "name": "Nvme$subsystem", 00:29:00.736 "trtype": "$TEST_TRANSPORT", 00:29:00.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.736 "adrfam": "ipv4", 00:29:00.736 "trsvcid": "$NVMF_PORT", 00:29:00.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.736 "hdgst": ${hdgst:-false}, 00:29:00.736 "ddgst": ${ddgst:-false} 00:29:00.736 }, 00:29:00.736 "method": "bdev_nvme_attach_controller" 00:29:00.736 } 00:29:00.736 EOF 00:29:00.736 )") 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
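
The resolved JSON printed next enables both NVMe/TCP digests (hdgst/ddgst true) on the attach. For comparison, a kernel initiator would request the same protections at connect time; the flag names below are per recent nvme-cli and should be checked against "nvme connect --help" on the installed version:

# Hedged kernel-initiator equivalent with header and data digests enabled.
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
  -n nqn.2016-06.io.spdk:cnode0 --hdr-digest --data-digest
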
00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:00.736 "params": { 00:29:00.736 "name": "Nvme0", 00:29:00.736 "trtype": "tcp", 00:29:00.736 "traddr": "10.0.0.2", 00:29:00.736 "adrfam": "ipv4", 00:29:00.736 "trsvcid": "4420", 00:29:00.736 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:00.736 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:00.736 "hdgst": true, 00:29:00.736 "ddgst": true 00:29:00.736 }, 00:29:00.736 "method": "bdev_nvme_attach_controller" 00:29:00.736 }' 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:00.736 01:34:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:00.992 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:29:00.992 ... 
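
The harness passes the bdev config and job file over /dev/fd/62 and /dev/fd/61, with LD_PRELOAD pointing at SPDK's fio plugin, as shown in the command above. Outside the harness the same run looks roughly like this (paths and file names are placeholders):

# Stand-alone form of the traced fio invocation, using regular files.
LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev \
  --spdk_json_conf bdev.json dif_digest.fio
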
00:29:00.992 fio-3.35 00:29:00.992 Starting 3 threads 00:29:01.249 EAL: No free 2048 kB hugepages reported on node 1 00:29:13.434 00:29:13.434 filename0: (groupid=0, jobs=1): err= 0: pid=3568005: Tue Jul 16 01:34:37 2024 00:29:13.434 read: IOPS=295, BW=36.9MiB/s (38.7MB/s)(371MiB/10045msec) 00:29:13.434 slat (nsec): min=6315, max=29903, avg=11469.72, stdev=1873.74 00:29:13.434 clat (usec): min=5058, max=49986, avg=10122.11, stdev=1236.39 00:29:13.434 lat (usec): min=5068, max=49998, avg=10133.58, stdev=1236.36 00:29:13.434 clat percentiles (usec): 00:29:13.434 | 1.00th=[ 8455], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9503], 00:29:13.434 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10290], 00:29:13.434 | 70.00th=[10421], 80.00th=[10683], 90.00th=[10945], 95.00th=[11207], 00:29:13.434 | 99.00th=[11863], 99.50th=[11994], 99.90th=[12518], 99.95th=[48497], 00:29:13.434 | 99.99th=[50070] 00:29:13.434 bw ( KiB/s): min=36608, max=39424, per=35.03%, avg=37977.60, stdev=706.11, samples=20 00:29:13.434 iops : min= 286, max= 308, avg=296.70, stdev= 5.52, samples=20 00:29:13.434 lat (msec) : 10=42.98%, 20=56.96%, 50=0.07% 00:29:13.434 cpu : usr=93.89%, sys=5.80%, ctx=26, majf=0, minf=99 00:29:13.434 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:13.434 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:13.434 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:13.434 issued rwts: total=2969,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:13.434 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:13.434 filename0: (groupid=0, jobs=1): err= 0: pid=3568006: Tue Jul 16 01:34:37 2024 00:29:13.434 read: IOPS=277, BW=34.6MiB/s (36.3MB/s)(348MiB/10045msec) 00:29:13.434 slat (nsec): min=6183, max=24821, avg=11430.83, stdev=1816.51 00:29:13.434 clat (usec): min=8358, max=49756, avg=10799.57, stdev=1244.38 00:29:13.434 lat (usec): min=8369, max=49772, avg=10811.00, stdev=1244.44 00:29:13.434 clat percentiles (usec): 00:29:13.434 | 1.00th=[ 9110], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10159], 00:29:13.434 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10683], 60.00th=[10945], 00:29:13.434 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11731], 95.00th=[11994], 00:29:13.434 | 99.00th=[12780], 99.50th=[12911], 99.90th=[14222], 99.95th=[46924], 00:29:13.434 | 99.99th=[49546] 00:29:13.434 bw ( KiB/s): min=34304, max=36864, per=32.83%, avg=35596.80, stdev=656.49, samples=20 00:29:13.434 iops : min= 268, max= 288, avg=278.10, stdev= 5.13, samples=20 00:29:13.434 lat (msec) : 10=13.80%, 20=86.13%, 50=0.07% 00:29:13.434 cpu : usr=94.46%, sys=5.24%, ctx=23, majf=0, minf=135 00:29:13.434 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:13.434 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:13.434 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:13.434 issued rwts: total=2783,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:13.434 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:13.434 filename0: (groupid=0, jobs=1): err= 0: pid=3568007: Tue Jul 16 01:34:37 2024 00:29:13.434 read: IOPS=274, BW=34.3MiB/s (36.0MB/s)(345MiB/10045msec) 00:29:13.434 slat (nsec): min=6332, max=32191, avg=11400.95, stdev=2059.67 00:29:13.434 clat (usec): min=6594, max=47045, avg=10901.47, stdev=1230.60 00:29:13.434 lat (usec): min=6606, max=47058, avg=10912.87, stdev=1230.57 00:29:13.434 clat percentiles (usec): 00:29:13.434 | 1.00th=[ 
9110], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10290], 00:29:13.434 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10814], 60.00th=[11076], 00:29:13.434 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11863], 95.00th=[12256], 00:29:13.434 | 99.00th=[12911], 99.50th=[13173], 99.90th=[13960], 99.95th=[44827], 00:29:13.434 | 99.99th=[46924] 00:29:13.434 bw ( KiB/s): min=34304, max=36352, per=32.51%, avg=35251.20, stdev=505.90, samples=20 00:29:13.434 iops : min= 268, max= 284, avg=275.40, stdev= 3.95, samples=20 00:29:13.434 lat (msec) : 10=12.26%, 20=87.67%, 50=0.07% 00:29:13.434 cpu : usr=94.51%, sys=5.19%, ctx=24, majf=0, minf=153 00:29:13.434 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:13.434 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:13.434 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:13.434 issued rwts: total=2757,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:13.434 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:13.434 00:29:13.434 Run status group 0 (all jobs): 00:29:13.434 READ: bw=106MiB/s (111MB/s), 34.3MiB/s-36.9MiB/s (36.0MB/s-38.7MB/s), io=1064MiB (1115MB), run=10045-10045msec 00:29:13.434 01:34:37 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:29:13.434 01:34:37 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:29:13.434 01:34:37 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:29:13.434 01:34:37 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:13.434 01:34:37 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:29:13.434 01:34:37 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:13.434 01:34:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:13.434 01:34:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:13.434 01:34:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:13.434 01:34:37 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:13.434 01:34:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:13.434 01:34:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:13.434 01:34:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:13.434 00:29:13.434 real 0m11.196s 00:29:13.434 user 0m35.550s 00:29:13.434 sys 0m1.906s 00:29:13.434 01:34:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:13.434 01:34:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:13.434 ************************************ 00:29:13.434 END TEST fio_dif_digest 00:29:13.434 ************************************ 00:29:13.434 01:34:37 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:29:13.434 01:34:37 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:29:13.434 01:34:37 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:29:13.434 01:34:37 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:13.434 01:34:37 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:29:13.434 01:34:37 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:13.434 01:34:37 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:29:13.434 01:34:37 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:13.434 01:34:37 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:29:13.434 rmmod nvme_tcp 00:29:13.434 rmmod nvme_fabrics 00:29:13.434 rmmod nvme_keyring 00:29:13.434 01:34:37 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:13.434 01:34:37 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:29:13.434 01:34:37 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:29:13.434 01:34:37 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 3559378 ']' 00:29:13.434 01:34:37 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 3559378 00:29:13.434 01:34:37 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 3559378 ']' 00:29:13.434 01:34:37 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 3559378 00:29:13.434 01:34:37 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:29:13.434 01:34:37 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:13.434 01:34:37 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3559378 00:29:13.434 01:34:37 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:13.434 01:34:37 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:13.434 01:34:37 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3559378' 00:29:13.434 killing process with pid 3559378 00:29:13.434 01:34:37 nvmf_dif -- common/autotest_common.sh@967 -- # kill 3559378 00:29:13.434 01:34:37 nvmf_dif -- common/autotest_common.sh@972 -- # wait 3559378 00:29:13.434 01:34:38 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:29:13.434 01:34:38 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:14.803 Waiting for block devices as requested 00:29:14.803 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:29:14.803 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:14.803 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:14.803 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:15.058 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:15.058 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:15.058 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:15.058 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:15.314 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:15.314 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:15.314 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:15.570 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:15.570 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:15.570 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:15.570 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:15.827 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:15.827 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:15.827 01:34:41 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:15.827 01:34:41 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:15.827 01:34:41 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:15.827 01:34:41 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:15.827 01:34:41 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:15.827 01:34:41 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:15.827 01:34:41 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.353 01:34:43 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:18.353 00:29:18.353 real 1m13.280s 00:29:18.353 user 7m12.260s 00:29:18.353 sys 0m18.385s 00:29:18.353 01:34:43 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:29:18.353 01:34:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:18.353 ************************************ 00:29:18.353 END TEST nvmf_dif 00:29:18.353 ************************************ 00:29:18.353 01:34:43 -- common/autotest_common.sh@1142 -- # return 0 00:29:18.353 01:34:43 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:29:18.353 01:34:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:18.353 01:34:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:18.353 01:34:43 -- common/autotest_common.sh@10 -- # set +x 00:29:18.353 ************************************ 00:29:18.353 START TEST nvmf_abort_qd_sizes 00:29:18.353 ************************************ 00:29:18.353 01:34:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:29:18.353 * Looking for test storage... 00:29:18.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:18.353 01:34:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:18.353 01:34:43 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:29:18.353 01:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:18.353 01:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:18.354 01:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:18.354 01:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:18.354 01:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:18.354 01:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:18.354 01:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:18.354 01:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:18.354 01:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:18.354 01:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:18.354 01:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:18.354 01:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:18.354 01:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:18.354 01:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:18.354 01:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:18.354 01:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:18.354 01:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:18.354 01:34:44 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:18.354 01:34:44 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:18.354 01:34:44 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:18.354 01:34:44 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.354 01:34:44 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.354 01:34:44 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.354 01:34:44 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:29:18.354 01:34:44 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.354 01:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:29:18.354 01:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:18.354 01:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:18.354 01:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:18.354 01:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:18.354 01:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:18.354 01:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:18.354 01:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:18.354 01:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:18.354 01:34:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:29:18.354 01:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:18.354 01:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:18.354 01:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:18.354 01:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:18.354 01:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:18.354 01:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.354 01:34:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:18.354 01:34:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.354 01:34:44 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:18.354 01:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:18.354 01:34:44 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:29:18.354 01:34:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:23.611 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:23.611 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:23.611 Found net devices under 0000:86:00.0: cvl_0_0 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:23.611 Found net devices under 0000:86:00.1: cvl_0_1 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
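The device discovery traced above boils down to two steps: match the known NIC PCI IDs (here the Intel E810, 8086:159b) against the bus, then resolve each matching PCIe function to its kernel net interface through sysfs. A minimal standalone sketch of that lookup follows; the variable names are illustrative and the device ID is hard-coded to the hardware seen in this run, so this is not the script's exact code:

# Map NVMe-oF-capable NICs (Intel E810, vendor:device 8086:159b) to their
# kernel net interface names via sysfs. Illustrative sketch only.
net_devs=()
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $path ]] || continue           # skip if the glob matched nothing
        net_devs+=("${path##*/}")            # keep the interface name, e.g. cvl_0_0
        echo "Found net devices under $pci: ${path##*/}"
    done
done

With both ports resolved to cvl_0_0 and cvl_0_1, the harness has the interface pair it needs for the namespace-based target/initiator split set up next.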
00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:23.611 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:23.612 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:23.612 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:23.612 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:23.612 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:23.612 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:23.612 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:23.612 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:23.612 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:23.612 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:23.612 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:23.612 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:23.612 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:23.612 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:29:23.612 00:29:23.612 --- 10.0.0.2 ping statistics --- 00:29:23.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.612 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:29:23.612 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:23.612 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:23.612 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:29:23.612 00:29:23.612 --- 10.0.0.1 ping statistics --- 00:29:23.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.612 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:29:23.612 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:23.612 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:29:23.612 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:29:23.612 01:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:26.155 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:26.155 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:26.155 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:26.155 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:26.155 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:26.155 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:26.155 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:26.155 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:26.155 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:26.155 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:26.155 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:26.155 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:26.155 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:26.155 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:26.155 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:26.155 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:27.526 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:29:27.782 01:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:27.782 01:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:27.782 01:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:27.782 01:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:27.782 01:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:27.782 01:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:27.783 01:34:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:29:27.783 01:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:27.783 01:34:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:27.783 01:34:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:27.783 01:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=3575790 00:29:27.783 01:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:29:27.783 01:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 3575790 00:29:27.783 01:34:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 3575790 ']' 00:29:27.783 01:34:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:27.783 01:34:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:27.783 01:34:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:27.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:27.783 01:34:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:27.783 01:34:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:27.783 [2024-07-16 01:34:53.673568] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:29:27.783 [2024-07-16 01:34:53.673613] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:27.783 EAL: No free 2048 kB hugepages reported on node 1 00:29:27.783 [2024-07-16 01:34:53.732265] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:28.038 [2024-07-16 01:34:53.812778] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:28.038 [2024-07-16 01:34:53.812813] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:28.038 [2024-07-16 01:34:53.812820] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:28.038 [2024-07-16 01:34:53.812825] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:28.038 [2024-07-16 01:34:53.812830] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:28.038 [2024-07-16 01:34:53.812887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:28.038 [2024-07-16 01:34:53.812983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:28.038 [2024-07-16 01:34:53.813083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:28.038 [2024-07-16 01:34:53.813084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:28.598 01:34:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:28.598 01:34:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:29:28.598 01:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:28.598 01:34:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:28.598 01:34:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:28.598 01:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:28.598 01:34:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:29:28.598 01:34:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:29:28.598 01:34:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:29:28.598 01:34:54 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:29:28.598 01:34:54 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:29:28.598 01:34:54 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:5e:00.0 ]] 00:29:28.598 01:34:54 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:29:28.599 01:34:54 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:29:28.599 01:34:54 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:29:28.599 01:34:54 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:29:28.599 01:34:54 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:29:28.599 01:34:54 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:29:28.599 01:34:54 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:29:28.599 01:34:54 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:5e:00.0 00:29:28.599 01:34:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:29:28.599 01:34:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:29:28.599 01:34:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:29:28.599 01:34:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:28.599 01:34:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:28.599 01:34:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:28.599 ************************************ 00:29:28.599 START TEST spdk_target_abort 00:29:28.599 ************************************ 00:29:28.599 01:34:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:29:28.599 01:34:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:29:28.599 01:34:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:29:28.599 01:34:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:28.599 01:34:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:31.872 spdk_targetn1 00:29:31.872 01:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.872 01:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:31.872 01:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.872 01:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:31.872 [2024-07-16 01:34:57.390961] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:31.872 01:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.872 01:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:29:31.872 01:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.872 01:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:31.872 01:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.872 01:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:29:31.872 01:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.872 01:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:31.872 01:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.872 01:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:29:31.872 01:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:31.872 01:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:31.872 [2024-07-16 01:34:57.419806] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:31.872 01:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:31.872 01:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:29:31.872 01:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:31.872 01:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:31.872 01:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:29:31.872 01:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:31.872 01:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:31.872 01:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:31.872 01:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:31.872 01:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:31.872 01:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:31.872 01:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:31.872 01:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:31.872 01:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:31.872 01:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:31.872 01:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:29:31.872 01:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:31.872 01:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:31.872 01:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:31.872 01:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:31.872 01:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:31.872 01:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:31.872 EAL: No free 2048 kB hugepages 
reported on node 1 00:29:35.237 Initializing NVMe Controllers 00:29:35.237 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:35.237 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:35.237 Initialization complete. Launching workers. 00:29:35.237 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 16474, failed: 0 00:29:35.237 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1318, failed to submit 15156 00:29:35.237 success 733, unsuccess 585, failed 0 00:29:35.237 01:35:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:35.237 01:35:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:35.237 EAL: No free 2048 kB hugepages reported on node 1 00:29:38.511 Initializing NVMe Controllers 00:29:38.511 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:38.511 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:38.511 Initialization complete. Launching workers. 00:29:38.511 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8706, failed: 0 00:29:38.511 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1263, failed to submit 7443 00:29:38.511 success 329, unsuccess 934, failed 0 00:29:38.511 01:35:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:38.511 01:35:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:38.511 EAL: No free 2048 kB hugepages reported on node 1 00:29:41.784 Initializing NVMe Controllers 00:29:41.784 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:41.784 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:41.784 Initialization complete. Launching workers. 
00:29:41.784 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38811, failed: 0 00:29:41.784 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2871, failed to submit 35940 00:29:41.784 success 607, unsuccess 2264, failed 0 00:29:41.784 01:35:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:29:41.784 01:35:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:41.784 01:35:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:41.784 01:35:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:41.784 01:35:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:29:41.784 01:35:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:41.784 01:35:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:43.153 01:35:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:43.153 01:35:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3575790 00:29:43.153 01:35:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 3575790 ']' 00:29:43.153 01:35:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 3575790 00:29:43.153 01:35:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:29:43.153 01:35:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:43.153 01:35:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3575790 00:29:43.153 01:35:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:43.153 01:35:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:43.153 01:35:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3575790' 00:29:43.153 killing process with pid 3575790 00:29:43.153 01:35:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 3575790 00:29:43.153 01:35:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 3575790 00:29:43.410 00:29:43.410 real 0m14.618s 00:29:43.410 user 0m58.287s 00:29:43.410 sys 0m2.260s 00:29:43.410 01:35:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:43.410 01:35:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:43.410 ************************************ 00:29:43.410 END TEST spdk_target_abort 00:29:43.410 ************************************ 00:29:43.410 01:35:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:29:43.410 01:35:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:29:43.410 01:35:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:43.410 01:35:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:43.410 01:35:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:43.410 
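For reference, each abort pass in the test above (and in the kernel-target test that follows, where the same pattern is pointed at 10.0.0.1) is one invocation of SPDK's abort example against the listener created earlier, varying only the queue depth. Below is a hand-run equivalent of the deepest pass with the options glossed as comments; the flag readings follow SPDK's usual example-tool conventions, and the success/unsuccess interpretation is mine, not wording from the tool itself:

# Flags as used in the traced runs:
#   -q 64    queue depth (the three passes use 4, 24 and 64)
#   -w rw    mixed read/write workload
#   -M 50    50% of the I/Os are reads
#   -o 4096  4 KiB I/O size
#   -r ...   transport ID of the subsystem created above
build/examples/abort -q 64 -w rw -M 50 -o 4096 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
# In the summary line, "success" counts aborts the controller completed,
# "unsuccess" counts aborts whose target I/O finished first, and "failed"
# counts abort submissions that errored out.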
************************************ 00:29:43.410 START TEST kernel_target_abort 00:29:43.410 ************************************ 00:29:43.410 01:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:29:43.410 01:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:29:43.410 01:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:29:43.410 01:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:43.410 01:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:43.410 01:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:43.410 01:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:43.410 01:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:43.410 01:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:43.410 01:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:43.410 01:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:43.410 01:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:43.410 01:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:29:43.410 01:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:29:43.410 01:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:29:43.410 01:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:43.410 01:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:43.410 01:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:43.410 01:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:29:43.410 01:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:29:43.410 01:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:29:43.410 01:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:43.410 01:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:45.934 Waiting for block devices as requested 00:29:45.934 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:29:45.934 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:46.191 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:46.191 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:46.191 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:46.191 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:46.447 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:46.447 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:46.447 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:46.703 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:46.703 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:46.703 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:46.703 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:46.961 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:46.961 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:46.961 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:47.218 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:47.218 01:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:29:47.218 01:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:47.218 01:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:29:47.218 01:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:29:47.218 01:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:47.218 01:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:29:47.218 01:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:29:47.218 01:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:29:47.218 01:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:47.218 No valid GPT data, bailing 00:29:47.218 01:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:47.218 01:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:29:47.218 01:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:29:47.218 01:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:29:47.218 01:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:29:47.218 01:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:47.218 01:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:47.218 01:35:13 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:47.218 01:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:29:47.218 01:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:29:47.218 01:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:29:47.218 01:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:29:47.218 01:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:29:47.218 01:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:29:47.218 01:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:29:47.218 01:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:29:47.218 01:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:47.218 01:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:29:47.473 00:29:47.473 Discovery Log Number of Records 2, Generation counter 2 00:29:47.473 =====Discovery Log Entry 0====== 00:29:47.473 trtype: tcp 00:29:47.473 adrfam: ipv4 00:29:47.473 subtype: current discovery subsystem 00:29:47.473 treq: not specified, sq flow control disable supported 00:29:47.473 portid: 1 00:29:47.473 trsvcid: 4420 00:29:47.473 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:47.473 traddr: 10.0.0.1 00:29:47.473 eflags: none 00:29:47.473 sectype: none 00:29:47.473 =====Discovery Log Entry 1====== 00:29:47.473 trtype: tcp 00:29:47.473 adrfam: ipv4 00:29:47.473 subtype: nvme subsystem 00:29:47.473 treq: not specified, sq flow control disable supported 00:29:47.473 portid: 1 00:29:47.473 trsvcid: 4420 00:29:47.473 subnqn: nqn.2016-06.io.spdk:testnqn 00:29:47.473 traddr: 10.0.0.1 00:29:47.473 eflags: none 00:29:47.473 sectype: none 00:29:47.474 01:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:29:47.474 01:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:47.474 01:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:47.474 01:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:29:47.474 01:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:47.474 01:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:47.474 01:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:47.474 01:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:47.474 01:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:47.474 01:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:47.474 01:35:13 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:47.474 01:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:47.474 01:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:47.474 01:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:47.474 01:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:29:47.474 01:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:47.474 01:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:29:47.474 01:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:47.474 01:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:47.474 01:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:47.474 01:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:47.474 EAL: No free 2048 kB hugepages reported on node 1 00:29:50.766 Initializing NVMe Controllers 00:29:50.766 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:50.766 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:50.766 Initialization complete. Launching workers. 00:29:50.766 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 92680, failed: 0 00:29:50.766 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 92680, failed to submit 0 00:29:50.766 success 0, unsuccess 92680, failed 0 00:29:50.766 01:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:50.766 01:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:50.766 EAL: No free 2048 kB hugepages reported on node 1 00:29:54.055 Initializing NVMe Controllers 00:29:54.055 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:54.055 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:54.055 Initialization complete. Launching workers. 
00:29:54.055 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 149640, failed: 0 00:29:54.055 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37146, failed to submit 112494 00:29:54.055 success 0, unsuccess 37146, failed 0 00:29:54.055 01:35:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:54.055 01:35:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:54.055 EAL: No free 2048 kB hugepages reported on node 1 00:29:56.579 Initializing NVMe Controllers 00:29:56.579 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:56.579 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:56.579 Initialization complete. Launching workers. 00:29:56.579 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 141365, failed: 0 00:29:56.579 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35398, failed to submit 105967 00:29:56.579 success 0, unsuccess 35398, failed 0 00:29:56.579 01:35:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:29:56.579 01:35:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:29:56.579 01:35:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:29:56.579 01:35:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:56.579 01:35:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:56.579 01:35:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:56.579 01:35:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:56.580 01:35:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:29:56.580 01:35:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:29:56.580 01:35:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:59.106 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:59.106 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:59.106 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:59.106 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:59.106 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:59.106 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:59.106 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:59.106 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:59.106 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:59.106 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:59.106 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:59.106 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:59.106 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:59.106 0000:80:04.2 (8086 2021): ioatdma 
-> vfio-pci 00:29:59.106 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:59.106 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:00.480 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:30:00.480 00:30:00.480 real 0m17.152s 00:30:00.480 user 0m8.547s 00:30:00.480 sys 0m4.534s 00:30:00.480 01:35:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:00.480 01:35:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:00.480 ************************************ 00:30:00.480 END TEST kernel_target_abort 00:30:00.480 ************************************ 00:30:00.480 01:35:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:30:00.480 01:35:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:00.480 01:35:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:30:00.480 01:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:00.480 01:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:30:00.480 01:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:00.480 01:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:30:00.480 01:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:00.480 01:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:00.480 rmmod nvme_tcp 00:30:00.480 rmmod nvme_fabrics 00:30:00.736 rmmod nvme_keyring 00:30:00.736 01:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:00.736 01:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:30:00.736 01:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:30:00.736 01:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 3575790 ']' 00:30:00.736 01:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 3575790 00:30:00.736 01:35:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 3575790 ']' 00:30:00.736 01:35:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 3575790 00:30:00.736 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3575790) - No such process 00:30:00.736 01:35:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 3575790 is not found' 00:30:00.736 Process with pid 3575790 is not found 00:30:00.736 01:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:30:00.736 01:35:26 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:03.278 Waiting for block devices as requested 00:30:03.278 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:30:03.278 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:03.278 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:03.278 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:03.278 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:03.278 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:03.278 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:03.548 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:03.548 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:03.548 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:03.548 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:03.806 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:03.806 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:03.806 0000:80:04.3 (8086 2021): vfio-pci -> 
ioatdma 00:30:03.806 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:04.064 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:04.064 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:04.064 01:35:30 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:04.064 01:35:30 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:04.064 01:35:30 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:04.064 01:35:30 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:04.064 01:35:30 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:04.064 01:35:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:04.064 01:35:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:06.590 01:35:32 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:06.590 00:30:06.590 real 0m48.185s 00:30:06.590 user 1m10.721s 00:30:06.590 sys 0m14.634s 00:30:06.590 01:35:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:06.590 01:35:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:06.590 ************************************ 00:30:06.590 END TEST nvmf_abort_qd_sizes 00:30:06.590 ************************************ 00:30:06.590 01:35:32 -- common/autotest_common.sh@1142 -- # return 0 00:30:06.590 01:35:32 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:30:06.590 01:35:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:06.590 01:35:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:06.590 01:35:32 -- common/autotest_common.sh@10 -- # set +x 00:30:06.590 ************************************ 00:30:06.590 START TEST keyring_file 00:30:06.590 ************************************ 00:30:06.590 01:35:32 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:30:06.590 * Looking for test storage... 
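[Example] The kernel_target_abort teardown that just ran (clean_kernel_target in the trace above) can be reproduced by hand. A condensed sketch, assuming the nqn.2016-06.io.spdk:testnqn subsystem and port 1 created earlier in this run; run as root:

  # disable the namespace first (the redirect target is hidden by xtrace; the
  # 'enable' attribute is an assumption based on the nvmet configfs layout)
  echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
  # unlink the subsystem from the port, then remove namespace, port and subsystem
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  modprobe -r nvmet_tcp nvmet   # only once no other nvmet users remain

The ordering matters: configfs refuses to rmdir a subsystem that is still linked to a port or still holds an enabled namespace, which is why the trace runs the rm/rmdir steps in exactly this sequence.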
00:30:06.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:30:06.590 01:35:32 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:30:06.590 01:35:32 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:06.590 01:35:32 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:30:06.590 01:35:32 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:06.590 01:35:32 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:06.590 01:35:32 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:06.590 01:35:32 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:06.590 01:35:32 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:06.590 01:35:32 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:06.590 01:35:32 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:06.590 01:35:32 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:06.590 01:35:32 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:06.590 01:35:32 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:06.590 01:35:32 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:06.590 01:35:32 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:06.590 01:35:32 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:06.590 01:35:32 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:06.590 01:35:32 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:06.590 01:35:32 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:06.590 01:35:32 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:06.590 01:35:32 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:06.590 01:35:32 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:06.590 01:35:32 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:06.590 01:35:32 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.590 01:35:32 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.590 01:35:32 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.590 01:35:32 keyring_file -- paths/export.sh@5 -- # export PATH 00:30:06.590 01:35:32 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.590 01:35:32 keyring_file -- nvmf/common.sh@47 -- # : 0 00:30:06.590 01:35:32 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:06.590 01:35:32 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:06.590 01:35:32 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:06.590 01:35:32 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:06.590 01:35:32 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:06.590 01:35:32 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:06.590 01:35:32 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:06.590 01:35:32 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:06.590 01:35:32 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:30:06.590 01:35:32 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:30:06.590 01:35:32 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:30:06.590 01:35:32 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:30:06.590 01:35:32 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:30:06.590 01:35:32 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:30:06.590 01:35:32 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:06.590 01:35:32 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:06.590 01:35:32 keyring_file -- keyring/common.sh@17 -- # name=key0 00:30:06.590 01:35:32 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:06.590 01:35:32 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:06.590 01:35:32 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:06.590 01:35:32 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.EpwuMCTTzC 00:30:06.590 01:35:32 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:06.590 01:35:32 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:06.590 01:35:32 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:06.590 01:35:32 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:06.590 01:35:32 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:06.590 01:35:32 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:06.591 01:35:32 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:06.591 01:35:32 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.EpwuMCTTzC 00:30:06.591 01:35:32 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.EpwuMCTTzC 00:30:06.591 01:35:32 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.EpwuMCTTzC 00:30:06.591 01:35:32 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:30:06.591 01:35:32 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:06.591 01:35:32 keyring_file -- keyring/common.sh@17 -- # name=key1 00:30:06.591 01:35:32 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:30:06.591 01:35:32 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:06.591 01:35:32 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:06.591 01:35:32 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.vbpIwf6XJm 00:30:06.591 01:35:32 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:30:06.591 01:35:32 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:30:06.591 01:35:32 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:06.591 01:35:32 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:06.591 01:35:32 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:30:06.591 01:35:32 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:06.591 01:35:32 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:06.591 01:35:32 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.vbpIwf6XJm 00:30:06.591 01:35:32 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.vbpIwf6XJm 00:30:06.591 01:35:32 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.vbpIwf6XJm 00:30:06.591 01:35:32 keyring_file -- keyring/file.sh@30 -- # tgtpid=3584544 00:30:06.591 01:35:32 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:30:06.591 01:35:32 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3584544 00:30:06.591 01:35:32 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3584544 ']' 00:30:06.591 01:35:32 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:06.591 01:35:32 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:06.591 01:35:32 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:06.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:06.591 01:35:32 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:06.591 01:35:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:06.591 [2024-07-16 01:35:32.438079] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
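[Example] The two key files prepped above come out of format_interchange_psk, which wraps the raw hex key in the NVMe/TCP TLS PSK interchange format via the inline `python -` step visible in the trace. A minimal sketch of that encoding, assuming digest 0 maps to the `00` (no-hash) identifier and that a little-endian CRC-32 of the key is appended before base64 encoding; both assumptions should be verified against nvmf/common.sh:

  # emit "NVMeTLSkey-1:00:<base64(key || crc32_le(key))>:" for the test key0
  python3 -c 'import base64, zlib; k = bytes.fromhex("00112233445566778899aabbccddeeff"); print("NVMeTLSkey-1:00:%s:" % base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode())'

The resulting string is what lands in /tmp/tmp.EpwuMCTTzC, chmod'd to 0600 because the keyring rejects group- or world-accessible key files, as the 0660 negative test later in this section shows.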
00:30:06.591 [2024-07-16 01:35:32.438128] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3584544 ] 00:30:06.591 EAL: No free 2048 kB hugepages reported on node 1 00:30:06.591 [2024-07-16 01:35:32.493592] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.591 [2024-07-16 01:35:32.564608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:07.523 01:35:33 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:07.523 01:35:33 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:30:07.523 01:35:33 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:30:07.523 01:35:33 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:07.523 01:35:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:07.523 [2024-07-16 01:35:33.230196] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:07.523 null0 00:30:07.523 [2024-07-16 01:35:33.262250] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:07.523 [2024-07-16 01:35:33.262588] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:07.523 [2024-07-16 01:35:33.270261] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:30:07.523 01:35:33 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:07.523 01:35:33 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:07.523 01:35:33 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:07.523 01:35:33 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:07.523 01:35:33 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:07.523 01:35:33 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:07.523 01:35:33 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:07.523 01:35:33 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:07.523 01:35:33 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:07.523 01:35:33 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:07.523 01:35:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:07.523 [2024-07-16 01:35:33.282291] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:30:07.523 request: 00:30:07.523 { 00:30:07.523 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:30:07.523 "secure_channel": false, 00:30:07.523 "listen_address": { 00:30:07.523 "trtype": "tcp", 00:30:07.523 "traddr": "127.0.0.1", 00:30:07.523 "trsvcid": "4420" 00:30:07.523 }, 00:30:07.523 "method": "nvmf_subsystem_add_listener", 00:30:07.523 "req_id": 1 00:30:07.523 } 00:30:07.523 Got JSON-RPC error response 00:30:07.523 response: 00:30:07.523 { 00:30:07.523 "code": -32602, 00:30:07.523 "message": "Invalid parameters" 00:30:07.523 } 00:30:07.523 01:35:33 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:07.523 01:35:33 keyring_file -- common/autotest_common.sh@651 -- # es=1 
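[Example] The NOT wrapper above asserts the negative path: the target already listens on 127.0.0.1:4420, so a second nvmf_subsystem_add_listener for the same address must come back with "Listener already exists". The same check, sketched standalone against a running spdk_tgt on the default /var/tmp/spdk.sock and run from the SPDK repo root:

  # a second add_listener for an address the target already listens on must fail
  if scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 \
         nqn.2016-06.io.spdk:cnode0; then
      echo "expected 'Listener already exists' JSON-RPC error" >&2
      exit 1
  fi

rpc.py exits non-zero on a JSON-RPC error response; the `(( es > 128 ))` / `(( !es == 0 ))` bookkeeping in the trace is just the harness normalizing that exit status.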
00:30:07.523 01:35:33 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:07.523 01:35:33 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:07.523 01:35:33 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:07.523 01:35:33 keyring_file -- keyring/file.sh@46 -- # bperfpid=3584583 00:30:07.523 01:35:33 keyring_file -- keyring/file.sh@48 -- # waitforlisten 3584583 /var/tmp/bperf.sock 00:30:07.523 01:35:33 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:30:07.523 01:35:33 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3584583 ']' 00:30:07.523 01:35:33 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:07.523 01:35:33 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:07.523 01:35:33 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:07.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:07.523 01:35:33 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:07.523 01:35:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:07.523 [2024-07-16 01:35:33.331891] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 00:30:07.523 [2024-07-16 01:35:33.331933] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3584583 ] 00:30:07.523 EAL: No free 2048 kB hugepages reported on node 1 00:30:07.523 [2024-07-16 01:35:33.386476] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:07.523 [2024-07-16 01:35:33.464655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:08.451 01:35:34 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:08.451 01:35:34 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:30:08.451 01:35:34 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.EpwuMCTTzC 00:30:08.451 01:35:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.EpwuMCTTzC 00:30:08.451 01:35:34 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.vbpIwf6XJm 00:30:08.451 01:35:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.vbpIwf6XJm 00:30:08.708 01:35:34 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:30:08.708 01:35:34 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:30:08.708 01:35:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:08.708 01:35:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:08.708 01:35:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:08.708 01:35:34 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.EpwuMCTTzC == \/\t\m\p\/\t\m\p\.\E\p\w\u\M\C\T\T\z\C ]] 00:30:08.708 01:35:34 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:30:08.708 01:35:34 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:30:08.708 01:35:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:08.708 01:35:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:08.708 01:35:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:08.964 01:35:34 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.vbpIwf6XJm == \/\t\m\p\/\t\m\p\.\v\b\p\I\w\f\6\X\J\m ]] 00:30:08.964 01:35:34 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:30:08.964 01:35:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:08.964 01:35:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:08.964 01:35:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:08.964 01:35:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:08.964 01:35:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:09.221 01:35:35 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:30:09.221 01:35:35 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:30:09.221 01:35:35 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:09.221 01:35:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:09.221 01:35:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:09.221 01:35:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:09.221 01:35:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:09.221 01:35:35 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:30:09.221 01:35:35 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:09.221 01:35:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:09.478 [2024-07-16 01:35:35.362476] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:09.478 nvme0n1 00:30:09.478 01:35:35 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:30:09.478 01:35:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:09.478 01:35:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:09.478 01:35:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:09.478 01:35:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:09.478 01:35:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:09.735 01:35:35 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:30:09.735 01:35:35 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:30:09.735 01:35:35 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:09.735 01:35:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:09.735 01:35:35 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:09.735 01:35:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:09.735 01:35:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:09.991 01:35:35 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:30:09.991 01:35:35 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:09.991 Running I/O for 1 seconds... 00:30:10.924 00:30:10.924 Latency(us) 00:30:10.924 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:10.924 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:30:10.924 nvme0n1 : 1.00 18385.17 71.82 0.00 0.00 6947.02 2902.31 10173.68 00:30:10.924 =================================================================================================================== 00:30:10.924 Total : 18385.17 71.82 0.00 0.00 6947.02 2902.31 10173.68 00:30:10.924 0 00:30:10.924 01:35:36 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:10.924 01:35:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:11.182 01:35:37 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:30:11.182 01:35:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:11.182 01:35:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:11.182 01:35:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:11.182 01:35:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:11.182 01:35:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:11.439 01:35:37 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:30:11.439 01:35:37 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:30:11.439 01:35:37 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:11.439 01:35:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:11.439 01:35:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:11.439 01:35:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:11.439 01:35:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:11.696 01:35:37 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:30:11.696 01:35:37 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:11.696 01:35:37 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:11.696 01:35:37 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:11.696 01:35:37 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:11.696 01:35:37 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:11.696 01:35:37 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:11.696 01:35:37 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:11.696 01:35:37 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:11.696 01:35:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:11.696 [2024-07-16 01:35:37.595176] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:30:11.696 [2024-07-16 01:35:37.595291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d61970 (107): Transport endpoint is not connected 00:30:11.696 [2024-07-16 01:35:37.596286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d61970 (9): Bad file descriptor 00:30:11.696 [2024-07-16 01:35:37.597287] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:11.696 [2024-07-16 01:35:37.597296] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:30:11.696 [2024-07-16 01:35:37.597303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:11.696 request: 00:30:11.696 { 00:30:11.696 "name": "nvme0", 00:30:11.696 "trtype": "tcp", 00:30:11.696 "traddr": "127.0.0.1", 00:30:11.696 "adrfam": "ipv4", 00:30:11.696 "trsvcid": "4420", 00:30:11.696 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:11.696 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:11.696 "prchk_reftag": false, 00:30:11.696 "prchk_guard": false, 00:30:11.696 "hdgst": false, 00:30:11.696 "ddgst": false, 00:30:11.696 "psk": "key1", 00:30:11.696 "method": "bdev_nvme_attach_controller", 00:30:11.696 "req_id": 1 00:30:11.696 } 00:30:11.696 Got JSON-RPC error response 00:30:11.696 response: 00:30:11.696 { 00:30:11.696 "code": -5, 00:30:11.696 "message": "Input/output error" 00:30:11.696 } 00:30:11.696 01:35:37 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:11.696 01:35:37 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:11.696 01:35:37 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:11.696 01:35:37 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:11.696 01:35:37 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:30:11.696 01:35:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:11.696 01:35:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:11.696 01:35:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:11.696 01:35:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:11.696 01:35:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:11.953 01:35:37 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:30:11.953 01:35:37 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:30:11.953 01:35:37 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:11.953 01:35:37 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:11.953 01:35:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:11.953 01:35:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:11.953 01:35:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:12.210 01:35:37 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:30:12.210 01:35:37 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:30:12.210 01:35:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:12.210 01:35:38 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:30:12.210 01:35:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:30:12.467 01:35:38 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:30:12.467 01:35:38 keyring_file -- keyring/file.sh@77 -- # jq length 00:30:12.467 01:35:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:12.724 01:35:38 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:30:12.724 01:35:38 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.EpwuMCTTzC 00:30:12.724 01:35:38 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.EpwuMCTTzC 00:30:12.724 01:35:38 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:12.724 01:35:38 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.EpwuMCTTzC 00:30:12.724 01:35:38 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:12.724 01:35:38 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:12.724 01:35:38 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:12.724 01:35:38 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:12.724 01:35:38 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.EpwuMCTTzC 00:30:12.724 01:35:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.EpwuMCTTzC 00:30:12.724 [2024-07-16 01:35:38.621006] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.EpwuMCTTzC': 0100660 00:30:12.724 [2024-07-16 01:35:38.621031] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:30:12.724 request: 00:30:12.724 { 00:30:12.724 "name": "key0", 00:30:12.724 "path": "/tmp/tmp.EpwuMCTTzC", 00:30:12.724 "method": "keyring_file_add_key", 00:30:12.724 "req_id": 1 00:30:12.724 } 00:30:12.724 Got JSON-RPC error response 00:30:12.724 response: 00:30:12.724 { 00:30:12.724 "code": -1, 00:30:12.724 "message": "Operation not permitted" 00:30:12.724 } 00:30:12.724 01:35:38 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:12.724 01:35:38 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:12.724 01:35:38 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:12.724 01:35:38 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:12.724 01:35:38 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.EpwuMCTTzC 00:30:12.724 01:35:38 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.EpwuMCTTzC 00:30:12.724 01:35:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.EpwuMCTTzC 00:30:12.981 01:35:38 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.EpwuMCTTzC 00:30:12.981 01:35:38 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:30:12.981 01:35:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:12.981 01:35:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:12.981 01:35:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:12.981 01:35:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:12.981 01:35:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:13.238 01:35:38 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:30:13.238 01:35:38 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:13.238 01:35:38 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:13.238 01:35:38 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:13.238 01:35:38 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:13.238 01:35:38 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:13.238 01:35:38 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:13.238 01:35:38 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:13.238 01:35:38 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:13.238 01:35:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:13.238 [2024-07-16 01:35:39.142393] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.EpwuMCTTzC': No such file or directory 00:30:13.238 [2024-07-16 01:35:39.142412] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:30:13.238 [2024-07-16 01:35:39.142430] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:30:13.238 [2024-07-16 01:35:39.142452] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:13.238 [2024-07-16 01:35:39.142459] bdev_nvme.c:6276:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:30:13.238 request: 00:30:13.238 { 00:30:13.238 "name": "nvme0", 00:30:13.238 "trtype": "tcp", 00:30:13.238 "traddr": "127.0.0.1", 00:30:13.238 "adrfam": "ipv4", 00:30:13.238 
"trsvcid": "4420", 00:30:13.238 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:13.238 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:13.238 "prchk_reftag": false, 00:30:13.238 "prchk_guard": false, 00:30:13.238 "hdgst": false, 00:30:13.238 "ddgst": false, 00:30:13.238 "psk": "key0", 00:30:13.238 "method": "bdev_nvme_attach_controller", 00:30:13.238 "req_id": 1 00:30:13.238 } 00:30:13.238 Got JSON-RPC error response 00:30:13.238 response: 00:30:13.238 { 00:30:13.238 "code": -19, 00:30:13.238 "message": "No such device" 00:30:13.238 } 00:30:13.238 01:35:39 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:13.238 01:35:39 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:13.238 01:35:39 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:13.238 01:35:39 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:13.238 01:35:39 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:30:13.238 01:35:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:13.495 01:35:39 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:13.495 01:35:39 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:13.495 01:35:39 keyring_file -- keyring/common.sh@17 -- # name=key0 00:30:13.495 01:35:39 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:13.495 01:35:39 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:13.495 01:35:39 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:13.495 01:35:39 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Tvg2jEvqvF 00:30:13.495 01:35:39 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:13.495 01:35:39 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:13.495 01:35:39 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:13.495 01:35:39 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:13.495 01:35:39 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:13.495 01:35:39 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:13.495 01:35:39 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:13.495 01:35:39 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Tvg2jEvqvF 00:30:13.495 01:35:39 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Tvg2jEvqvF 00:30:13.495 01:35:39 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.Tvg2jEvqvF 00:30:13.495 01:35:39 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Tvg2jEvqvF 00:30:13.495 01:35:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Tvg2jEvqvF 00:30:13.753 01:35:39 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:13.753 01:35:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:14.010 nvme0n1 00:30:14.010 
01:35:39 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:30:14.010 01:35:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:14.010 01:35:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:14.010 01:35:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:14.010 01:35:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:14.010 01:35:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:14.010 01:35:39 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:30:14.010 01:35:39 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:30:14.010 01:35:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:14.267 01:35:40 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:30:14.267 01:35:40 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:30:14.267 01:35:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:14.267 01:35:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:14.267 01:35:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:14.525 01:35:40 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:30:14.525 01:35:40 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:30:14.525 01:35:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:14.525 01:35:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:14.525 01:35:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:14.525 01:35:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:14.525 01:35:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:14.525 01:35:40 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:30:14.525 01:35:40 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:14.525 01:35:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:14.782 01:35:40 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:30:14.782 01:35:40 keyring_file -- keyring/file.sh@104 -- # jq length 00:30:14.782 01:35:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:15.038 01:35:40 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:30:15.039 01:35:40 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Tvg2jEvqvF 00:30:15.039 01:35:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Tvg2jEvqvF 00:30:15.039 01:35:41 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.vbpIwf6XJm 00:30:15.039 01:35:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.vbpIwf6XJm 00:30:15.295 01:35:41 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:15.295 01:35:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:15.553 nvme0n1 00:30:15.553 01:35:41 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:30:15.553 01:35:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:30:15.811 01:35:41 keyring_file -- keyring/file.sh@112 -- # config='{ 00:30:15.811 "subsystems": [ 00:30:15.811 { 00:30:15.811 "subsystem": "keyring", 00:30:15.811 "config": [ 00:30:15.811 { 00:30:15.811 "method": "keyring_file_add_key", 00:30:15.811 "params": { 00:30:15.811 "name": "key0", 00:30:15.811 "path": "/tmp/tmp.Tvg2jEvqvF" 00:30:15.811 } 00:30:15.811 }, 00:30:15.811 { 00:30:15.811 "method": "keyring_file_add_key", 00:30:15.811 "params": { 00:30:15.811 "name": "key1", 00:30:15.811 "path": "/tmp/tmp.vbpIwf6XJm" 00:30:15.811 } 00:30:15.811 } 00:30:15.811 ] 00:30:15.811 }, 00:30:15.811 { 00:30:15.811 "subsystem": "iobuf", 00:30:15.811 "config": [ 00:30:15.811 { 00:30:15.811 "method": "iobuf_set_options", 00:30:15.811 "params": { 00:30:15.811 "small_pool_count": 8192, 00:30:15.811 "large_pool_count": 1024, 00:30:15.811 "small_bufsize": 8192, 00:30:15.811 "large_bufsize": 135168 00:30:15.811 } 00:30:15.811 } 00:30:15.811 ] 00:30:15.811 }, 00:30:15.811 { 00:30:15.811 "subsystem": "sock", 00:30:15.811 "config": [ 00:30:15.811 { 00:30:15.811 "method": "sock_set_default_impl", 00:30:15.811 "params": { 00:30:15.811 "impl_name": "posix" 00:30:15.811 } 00:30:15.811 }, 00:30:15.811 { 00:30:15.811 "method": "sock_impl_set_options", 00:30:15.811 "params": { 00:30:15.811 "impl_name": "ssl", 00:30:15.811 "recv_buf_size": 4096, 00:30:15.811 "send_buf_size": 4096, 00:30:15.811 "enable_recv_pipe": true, 00:30:15.811 "enable_quickack": false, 00:30:15.811 "enable_placement_id": 0, 00:30:15.811 "enable_zerocopy_send_server": true, 00:30:15.811 "enable_zerocopy_send_client": false, 00:30:15.811 "zerocopy_threshold": 0, 00:30:15.811 "tls_version": 0, 00:30:15.811 "enable_ktls": false 00:30:15.811 } 00:30:15.811 }, 00:30:15.811 { 00:30:15.811 "method": "sock_impl_set_options", 00:30:15.811 "params": { 00:30:15.811 "impl_name": "posix", 00:30:15.811 "recv_buf_size": 2097152, 00:30:15.811 "send_buf_size": 2097152, 00:30:15.811 "enable_recv_pipe": true, 00:30:15.811 "enable_quickack": false, 00:30:15.811 "enable_placement_id": 0, 00:30:15.811 "enable_zerocopy_send_server": true, 00:30:15.811 "enable_zerocopy_send_client": false, 00:30:15.811 "zerocopy_threshold": 0, 00:30:15.811 "tls_version": 0, 00:30:15.811 "enable_ktls": false 00:30:15.811 } 00:30:15.811 } 00:30:15.811 ] 00:30:15.811 }, 00:30:15.811 { 00:30:15.811 "subsystem": "vmd", 00:30:15.811 "config": [] 00:30:15.811 }, 00:30:15.811 { 00:30:15.811 "subsystem": "accel", 00:30:15.811 "config": [ 00:30:15.811 { 00:30:15.811 "method": "accel_set_options", 00:30:15.811 "params": { 00:30:15.811 "small_cache_size": 128, 00:30:15.811 "large_cache_size": 16, 00:30:15.811 "task_count": 2048, 00:30:15.811 "sequence_count": 2048, 00:30:15.811 "buf_count": 2048 00:30:15.811 } 00:30:15.811 } 00:30:15.811 ] 00:30:15.811 
}, 00:30:15.811 { 00:30:15.811 "subsystem": "bdev", 00:30:15.811 "config": [ 00:30:15.811 { 00:30:15.811 "method": "bdev_set_options", 00:30:15.811 "params": { 00:30:15.811 "bdev_io_pool_size": 65535, 00:30:15.811 "bdev_io_cache_size": 256, 00:30:15.812 "bdev_auto_examine": true, 00:30:15.812 "iobuf_small_cache_size": 128, 00:30:15.812 "iobuf_large_cache_size": 16 00:30:15.812 } 00:30:15.812 }, 00:30:15.812 { 00:30:15.812 "method": "bdev_raid_set_options", 00:30:15.812 "params": { 00:30:15.812 "process_window_size_kb": 1024 00:30:15.812 } 00:30:15.812 }, 00:30:15.812 { 00:30:15.812 "method": "bdev_iscsi_set_options", 00:30:15.812 "params": { 00:30:15.812 "timeout_sec": 30 00:30:15.812 } 00:30:15.812 }, 00:30:15.812 { 00:30:15.812 "method": "bdev_nvme_set_options", 00:30:15.812 "params": { 00:30:15.812 "action_on_timeout": "none", 00:30:15.812 "timeout_us": 0, 00:30:15.812 "timeout_admin_us": 0, 00:30:15.812 "keep_alive_timeout_ms": 10000, 00:30:15.812 "arbitration_burst": 0, 00:30:15.812 "low_priority_weight": 0, 00:30:15.812 "medium_priority_weight": 0, 00:30:15.812 "high_priority_weight": 0, 00:30:15.812 "nvme_adminq_poll_period_us": 10000, 00:30:15.812 "nvme_ioq_poll_period_us": 0, 00:30:15.812 "io_queue_requests": 512, 00:30:15.812 "delay_cmd_submit": true, 00:30:15.812 "transport_retry_count": 4, 00:30:15.812 "bdev_retry_count": 3, 00:30:15.812 "transport_ack_timeout": 0, 00:30:15.812 "ctrlr_loss_timeout_sec": 0, 00:30:15.812 "reconnect_delay_sec": 0, 00:30:15.812 "fast_io_fail_timeout_sec": 0, 00:30:15.812 "disable_auto_failback": false, 00:30:15.812 "generate_uuids": false, 00:30:15.812 "transport_tos": 0, 00:30:15.812 "nvme_error_stat": false, 00:30:15.812 "rdma_srq_size": 0, 00:30:15.812 "io_path_stat": false, 00:30:15.812 "allow_accel_sequence": false, 00:30:15.812 "rdma_max_cq_size": 0, 00:30:15.812 "rdma_cm_event_timeout_ms": 0, 00:30:15.812 "dhchap_digests": [ 00:30:15.812 "sha256", 00:30:15.812 "sha384", 00:30:15.812 "sha512" 00:30:15.812 ], 00:30:15.812 "dhchap_dhgroups": [ 00:30:15.812 "null", 00:30:15.812 "ffdhe2048", 00:30:15.812 "ffdhe3072", 00:30:15.812 "ffdhe4096", 00:30:15.812 "ffdhe6144", 00:30:15.812 "ffdhe8192" 00:30:15.812 ] 00:30:15.812 } 00:30:15.812 }, 00:30:15.812 { 00:30:15.812 "method": "bdev_nvme_attach_controller", 00:30:15.812 "params": { 00:30:15.812 "name": "nvme0", 00:30:15.812 "trtype": "TCP", 00:30:15.812 "adrfam": "IPv4", 00:30:15.812 "traddr": "127.0.0.1", 00:30:15.812 "trsvcid": "4420", 00:30:15.812 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:15.812 "prchk_reftag": false, 00:30:15.812 "prchk_guard": false, 00:30:15.812 "ctrlr_loss_timeout_sec": 0, 00:30:15.812 "reconnect_delay_sec": 0, 00:30:15.812 "fast_io_fail_timeout_sec": 0, 00:30:15.812 "psk": "key0", 00:30:15.812 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:15.812 "hdgst": false, 00:30:15.812 "ddgst": false 00:30:15.812 } 00:30:15.812 }, 00:30:15.812 { 00:30:15.812 "method": "bdev_nvme_set_hotplug", 00:30:15.812 "params": { 00:30:15.812 "period_us": 100000, 00:30:15.812 "enable": false 00:30:15.812 } 00:30:15.812 }, 00:30:15.812 { 00:30:15.812 "method": "bdev_wait_for_examine" 00:30:15.812 } 00:30:15.812 ] 00:30:15.812 }, 00:30:15.812 { 00:30:15.812 "subsystem": "nbd", 00:30:15.812 "config": [] 00:30:15.812 } 00:30:15.812 ] 00:30:15.812 }' 00:30:15.812 01:35:41 keyring_file -- keyring/file.sh@114 -- # killprocess 3584583 00:30:15.812 01:35:41 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3584583 ']' 00:30:15.812 01:35:41 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 3584583 00:30:15.812 01:35:41 keyring_file -- common/autotest_common.sh@953 -- # uname 00:30:15.812 01:35:41 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:15.812 01:35:41 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3584583 00:30:15.812 01:35:41 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:15.812 01:35:41 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:15.812 01:35:41 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3584583' 00:30:15.812 killing process with pid 3584583 00:30:15.812 01:35:41 keyring_file -- common/autotest_common.sh@967 -- # kill 3584583 00:30:15.812 Received shutdown signal, test time was about 1.000000 seconds 00:30:15.812 00:30:15.812 Latency(us) 00:30:15.812 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:15.812 =================================================================================================================== 00:30:15.812 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:15.812 01:35:41 keyring_file -- common/autotest_common.sh@972 -- # wait 3584583 00:30:16.070 01:35:41 keyring_file -- keyring/file.sh@117 -- # bperfpid=3586096 00:30:16.070 01:35:41 keyring_file -- keyring/file.sh@119 -- # waitforlisten 3586096 /var/tmp/bperf.sock 00:30:16.070 01:35:41 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3586096 ']' 00:30:16.070 01:35:41 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:16.070 01:35:41 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:30:16.070 01:35:41 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:16.070 01:35:41 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:16.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
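[Example] The relaunch below feeds the JSON captured by save_config back into a fresh bdevperf through a process substitution (the `-c /dev/fd/63` in the trace), so the new process starts with both file keys loaded and nvme0 attached with psk key0 before any test runs. The same pattern, sketched standalone from the SPDK repo root:

  # capture config over the old bperf socket, then boot a new bdevperf from it
  config=$(scripts/rpc.py -s /var/tmp/bperf.sock save_config)
  build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
      -r /var/tmp/bperf.sock -z -c <(echo "$config")

With -z, bdevperf starts its RPC server and waits; I/O only begins once perform_tests is issued over the socket, which is what bdevperf.py did earlier in this section.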
00:30:16.070 01:35:41 keyring_file -- keyring/file.sh@115 -- # echo '{
00:30:16.070   "subsystems": [
00:30:16.070     {
00:30:16.070       "subsystem": "keyring",
00:30:16.070       "config": [
00:30:16.070         {
00:30:16.070           "method": "keyring_file_add_key",
00:30:16.070           "params": {
00:30:16.070             "name": "key0",
00:30:16.070             "path": "/tmp/tmp.Tvg2jEvqvF"
00:30:16.070           }
00:30:16.070         },
00:30:16.070         {
00:30:16.070           "method": "keyring_file_add_key",
00:30:16.070           "params": {
00:30:16.070             "name": "key1",
00:30:16.070             "path": "/tmp/tmp.vbpIwf6XJm"
00:30:16.070           }
00:30:16.070         }
00:30:16.070       ]
00:30:16.070     },
00:30:16.070     {
00:30:16.070       "subsystem": "iobuf",
00:30:16.070       "config": [
00:30:16.070         {
00:30:16.070           "method": "iobuf_set_options",
00:30:16.070           "params": {
00:30:16.070             "small_pool_count": 8192,
00:30:16.070             "large_pool_count": 1024,
00:30:16.070             "small_bufsize": 8192,
00:30:16.070             "large_bufsize": 135168
00:30:16.070           }
00:30:16.070         }
00:30:16.070       ]
00:30:16.070     },
00:30:16.070     {
00:30:16.070       "subsystem": "sock",
00:30:16.070       "config": [
00:30:16.070         {
00:30:16.070           "method": "sock_set_default_impl",
00:30:16.070           "params": {
00:30:16.070             "impl_name": "posix"
00:30:16.070           }
00:30:16.070         },
00:30:16.070         {
00:30:16.070           "method": "sock_impl_set_options",
00:30:16.070           "params": {
00:30:16.070             "impl_name": "ssl",
00:30:16.070             "recv_buf_size": 4096,
00:30:16.070             "send_buf_size": 4096,
00:30:16.070             "enable_recv_pipe": true,
00:30:16.070             "enable_quickack": false,
00:30:16.070             "enable_placement_id": 0,
00:30:16.070             "enable_zerocopy_send_server": true,
00:30:16.070             "enable_zerocopy_send_client": false,
00:30:16.070             "zerocopy_threshold": 0,
00:30:16.070             "tls_version": 0,
00:30:16.070             "enable_ktls": false
00:30:16.070           }
00:30:16.070         },
00:30:16.070         {
00:30:16.070           "method": "sock_impl_set_options",
00:30:16.070           "params": {
00:30:16.070             "impl_name": "posix",
00:30:16.070             "recv_buf_size": 2097152,
00:30:16.070             "send_buf_size": 2097152,
00:30:16.070             "enable_recv_pipe": true,
00:30:16.070             "enable_quickack": false,
00:30:16.070             "enable_placement_id": 0,
00:30:16.070             "enable_zerocopy_send_server": true,
00:30:16.070             "enable_zerocopy_send_client": false,
00:30:16.070             "zerocopy_threshold": 0,
00:30:16.070             "tls_version": 0,
00:30:16.070             "enable_ktls": false
00:30:16.070           }
00:30:16.070         }
00:30:16.070       ]
00:30:16.070     },
00:30:16.070     {
00:30:16.070       "subsystem": "vmd",
00:30:16.070       "config": []
00:30:16.070     },
00:30:16.070     {
00:30:16.070       "subsystem": "accel",
00:30:16.070       "config": [
00:30:16.070         {
00:30:16.070           "method": "accel_set_options",
00:30:16.070           "params": {
00:30:16.070             "small_cache_size": 128,
00:30:16.070             "large_cache_size": 16,
00:30:16.070             "task_count": 2048,
00:30:16.070             "sequence_count": 2048,
00:30:16.070             "buf_count": 2048
00:30:16.070           }
00:30:16.070         }
00:30:16.070       ]
00:30:16.070     },
00:30:16.070     {
00:30:16.070       "subsystem": "bdev",
00:30:16.070       "config": [
00:30:16.070         {
00:30:16.070           "method": "bdev_set_options",
00:30:16.070           "params": {
00:30:16.070             "bdev_io_pool_size": 65535,
00:30:16.070             "bdev_io_cache_size": 256,
00:30:16.070             "bdev_auto_examine": true,
00:30:16.070             "iobuf_small_cache_size": 128,
00:30:16.070             "iobuf_large_cache_size": 16
00:30:16.070           }
00:30:16.070         },
00:30:16.070         {
00:30:16.070           "method": "bdev_raid_set_options",
00:30:16.070           "params": {
00:30:16.070             "process_window_size_kb": 1024
00:30:16.070           }
00:30:16.070         },
00:30:16.070         {
00:30:16.070           "method": "bdev_iscsi_set_options",
00:30:16.070           "params": {
00:30:16.070             "timeout_sec": 30
00:30:16.070           }
00:30:16.070         },
00:30:16.070         {
00:30:16.070           "method": "bdev_nvme_set_options",
00:30:16.070           "params": {
00:30:16.070             "action_on_timeout": "none",
00:30:16.070             "timeout_us": 0,
00:30:16.070             "timeout_admin_us": 0,
00:30:16.070             "keep_alive_timeout_ms": 10000,
00:30:16.070             "arbitration_burst": 0,
00:30:16.070             "low_priority_weight": 0,
00:30:16.070             "medium_priority_weight": 0,
00:30:16.070             "high_priority_weight": 0,
00:30:16.070             "nvme_adminq_poll_period_us": 10000,
00:30:16.070             "nvme_ioq_poll_period_us": 0,
00:30:16.070             "io_queue_requests": 512,
00:30:16.070             "delay_cmd_submit": true,
00:30:16.070             "transport_retry_count": 4,
00:30:16.070             "bdev_retry_count": 3,
00:30:16.070             "transport_ack_timeout": 0,
00:30:16.070             "ctrlr_loss_timeout_sec": 0,
00:30:16.070             "reconnect_delay_sec": 0,
00:30:16.070             "fast_io_fail_timeout_sec": 0,
00:30:16.070             "disable_auto_failback": false,
00:30:16.070             "generate_uuids": false,
00:30:16.070             "transport_tos": 0,
00:30:16.070             "nvme_error_stat": false,
00:30:16.070             "rdma_srq_size": 0,
00:30:16.070             "io_path_stat": false,
00:30:16.070             "allow_accel_sequence": false,
00:30:16.070             "rdma_max_cq_size": 0,
00:30:16.070             "rdma_cm_event_timeout_ms": 0,
00:30:16.070             "dhchap_digests": [
00:30:16.070               "sha256",
00:30:16.070               "sha384",
00:30:16.070               "sha512"
00:30:16.070             ],
00:30:16.070             "dhchap_dhgroups": [
00:30:16.070               "null",
00:30:16.070               "ffdhe2048",
00:30:16.070               "ffdhe3072",
00:30:16.070               "ffdhe4096",
00:30:16.070               "ffdhe6144",
00:30:16.070               "ffdhe8192"
00:30:16.070             ]
00:30:16.070           }
00:30:16.070         },
00:30:16.070         {
00:30:16.070           "method": "bdev_nvme_attach_controller",
00:30:16.070           "params": {
00:30:16.070             "name": "nvme0",
00:30:16.070             "trtype": "TCP",
00:30:16.070             "adrfam": "IPv4",
00:30:16.070             "traddr": "127.0.0.1",
00:30:16.070             "trsvcid": "4420",
00:30:16.070             "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:30:16.070             "prchk_reftag": false,
00:30:16.070             "prchk_guard": false,
00:30:16.071             "ctrlr_loss_timeout_sec": 0,
00:30:16.071             "reconnect_delay_sec": 0,
00:30:16.071             "fast_io_fail_timeout_sec": 0,
00:30:16.071             "psk": "key0",
00:30:16.071             "hostnqn": "nqn.2016-06.io.spdk:host0",
00:30:16.071             "hdgst": false,
00:30:16.071             "ddgst": false
00:30:16.071           }
00:30:16.071         },
00:30:16.071         {
00:30:16.071           "method": "bdev_nvme_set_hotplug",
00:30:16.071           "params": {
00:30:16.071             "period_us": 100000,
00:30:16.071             "enable": false
00:30:16.071           }
00:30:16.071         },
00:30:16.071         {
00:30:16.071           "method": "bdev_wait_for_examine"
00:30:16.071         }
00:30:16.071       ]
00:30:16.071     },
00:30:16.071     {
00:30:16.071       "subsystem": "nbd",
00:30:16.071       "config": []
00:30:16.071     }
00:30:16.071   ]
00:30:16.071 }'
00:30:16.071 01:35:41 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable
00:30:16.071 01:35:41 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:30:16.071 [2024-07-16 01:35:41.897326] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization...
00:30:16.071 [2024-07-16 01:35:41.897383] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3586096 ] 00:30:16.071 EAL: No free 2048 kB hugepages reported on node 1 00:30:16.071 [2024-07-16 01:35:41.951476] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:16.071 [2024-07-16 01:35:42.029596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:16.328 [2024-07-16 01:35:42.186952] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:16.892 01:35:42 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:16.892 01:35:42 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:30:16.892 01:35:42 keyring_file -- keyring/file.sh@120 -- # jq length 00:30:16.892 01:35:42 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:30:16.892 01:35:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:16.892 01:35:42 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:30:16.892 01:35:42 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:30:16.892 01:35:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:16.892 01:35:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:16.892 01:35:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:16.892 01:35:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:16.892 01:35:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:17.148 01:35:43 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:30:17.148 01:35:43 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:30:17.148 01:35:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:17.148 01:35:43 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:17.148 01:35:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:17.148 01:35:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:17.148 01:35:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:17.403 01:35:43 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:30:17.403 01:35:43 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:30:17.403 01:35:43 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:30:17.403 01:35:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:30:17.403 01:35:43 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:30:17.403 01:35:43 keyring_file -- keyring/file.sh@1 -- # cleanup 00:30:17.403 01:35:43 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.Tvg2jEvqvF /tmp/tmp.vbpIwf6XJm 00:30:17.403 01:35:43 keyring_file -- keyring/file.sh@20 -- # killprocess 3586096 00:30:17.403 01:35:43 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3586096 ']' 00:30:17.403 01:35:43 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3586096 00:30:17.403 01:35:43 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:30:17.403 01:35:43 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:17.403 01:35:43 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3586096 00:30:17.659 01:35:43 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:17.659 01:35:43 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:17.659 01:35:43 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3586096' 00:30:17.659 killing process with pid 3586096 00:30:17.659 01:35:43 keyring_file -- common/autotest_common.sh@967 -- # kill 3586096 00:30:17.659 Received shutdown signal, test time was about 1.000000 seconds 00:30:17.659 00:30:17.659 Latency(us) 00:30:17.659 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:17.659 =================================================================================================================== 00:30:17.659 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:17.659 01:35:43 keyring_file -- common/autotest_common.sh@972 -- # wait 3586096 00:30:17.659 01:35:43 keyring_file -- keyring/file.sh@21 -- # killprocess 3584544 00:30:17.659 01:35:43 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3584544 ']' 00:30:17.659 01:35:43 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3584544 00:30:17.659 01:35:43 keyring_file -- common/autotest_common.sh@953 -- # uname 00:30:17.659 01:35:43 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:17.659 01:35:43 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3584544 00:30:17.659 01:35:43 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:17.659 01:35:43 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:17.659 01:35:43 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3584544' 00:30:17.659 killing process with pid 3584544 00:30:17.915 01:35:43 keyring_file -- common/autotest_common.sh@967 -- # kill 3584544 00:30:17.915 [2024-07-16 01:35:43.646719] app.c:1029:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:30:17.915 01:35:43 keyring_file -- common/autotest_common.sh@972 -- # wait 3584544 00:30:18.171 00:30:18.171 real 0m11.791s 00:30:18.171 user 0m28.240s 00:30:18.171 sys 0m2.635s 00:30:18.171 01:35:43 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:18.171 01:35:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:18.171 ************************************ 00:30:18.171 END TEST keyring_file 00:30:18.171 ************************************ 00:30:18.171 01:35:43 -- common/autotest_common.sh@1142 -- # return 0 00:30:18.171 01:35:43 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:30:18.171 01:35:43 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:30:18.171 01:35:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:18.171 01:35:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:18.171 01:35:43 -- common/autotest_common.sh@10 -- # set +x 00:30:18.171 ************************************ 00:30:18.171 START TEST keyring_linux 00:30:18.171 ************************************ 00:30:18.171 01:35:44 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:30:18.171 * Looking for test storage... 00:30:18.171 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:30:18.171 01:35:44 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:30:18.171 01:35:44 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:18.171 01:35:44 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:30:18.171 01:35:44 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:18.171 01:35:44 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:18.171 01:35:44 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:18.171 01:35:44 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:18.171 01:35:44 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:18.171 01:35:44 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:18.171 01:35:44 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:18.171 01:35:44 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:18.171 01:35:44 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:18.171 01:35:44 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:18.171 01:35:44 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:18.171 01:35:44 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:18.171 01:35:44 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:18.171 01:35:44 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:18.171 01:35:44 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:18.171 01:35:44 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:18.171 01:35:44 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:18.171 01:35:44 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:18.171 01:35:44 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:18.171 01:35:44 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:18.171 01:35:44 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.171 01:35:44 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.171 01:35:44 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.171 01:35:44 keyring_linux -- paths/export.sh@5 -- # export PATH 00:30:18.171 01:35:44 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.171 01:35:44 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:30:18.171 01:35:44 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:18.171 01:35:44 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:18.171 01:35:44 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:18.171 01:35:44 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:18.171 01:35:44 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:18.171 01:35:44 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:18.171 01:35:44 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:18.171 01:35:44 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:18.171 01:35:44 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:30:18.171 01:35:44 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:30:18.171 01:35:44 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:30:18.171 01:35:44 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:30:18.171 01:35:44 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:30:18.171 01:35:44 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:30:18.171 01:35:44 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:30:18.172 01:35:44 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:30:18.172 01:35:44 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:30:18.172 01:35:44 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:18.172 01:35:44 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:30:18.172 01:35:44 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:30:18.172 01:35:44 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:18.172 01:35:44 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:18.172 01:35:44 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:30:18.172 01:35:44 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:18.172 01:35:44 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:18.172 01:35:44 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:30:18.172 01:35:44 keyring_linux -- nvmf/common.sh@705 -- # python - 00:30:18.428 01:35:44 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:30:18.428 01:35:44 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:30:18.428 /tmp/:spdk-test:key0 00:30:18.428 01:35:44 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:30:18.428 01:35:44 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:30:18.428 01:35:44 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:30:18.428 01:35:44 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:30:18.428 01:35:44 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:30:18.428 01:35:44 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:30:18.428 01:35:44 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:30:18.428 01:35:44 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:30:18.429 01:35:44 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:30:18.429 01:35:44 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:18.429 01:35:44 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:30:18.429 01:35:44 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:30:18.429 01:35:44 keyring_linux -- nvmf/common.sh@705 -- # python - 00:30:18.429 01:35:44 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:30:18.429 01:35:44 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:30:18.429 /tmp/:spdk-test:key1 00:30:18.429 01:35:44 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3586642 00:30:18.429 01:35:44 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3586642 00:30:18.429 01:35:44 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:30:18.429 01:35:44 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3586642 ']' 00:30:18.429 01:35:44 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:18.429 01:35:44 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:18.429 01:35:44 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:18.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:18.429 01:35:44 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:18.429 01:35:44 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:18.429 [2024-07-16 01:35:44.262900] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
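The prep_key calls above pipe each raw key through format_interchange_psk (the python - step in the trace). For digest 0 that amounts to appending a little-endian CRC32 of the key text and base64-encoding the result into the NVMeTLSkey-1:00:...: envelope written to /tmp/:spdk-test:key0 and key1. A rough one-line equivalent (sketch of the transformation, not the exact helper):

    # should reproduce the key0 payload registered below
    python3 -c 'import base64,zlib; k=b"00112233445566778899aabbccddeeff"; print("NVMeTLSkey-1:00:%s:" % base64.b64encode(k + zlib.crc32(k).to_bytes(4,"little")).decode())'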
00:30:18.429 [2024-07-16 01:35:44.262954] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3586642 ] 00:30:18.429 EAL: No free 2048 kB hugepages reported on node 1 00:30:18.429 [2024-07-16 01:35:44.316259] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:18.429 [2024-07-16 01:35:44.393420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:19.356 01:35:45 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:19.356 01:35:45 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:30:19.356 01:35:45 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:30:19.356 01:35:45 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.356 01:35:45 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:19.356 [2024-07-16 01:35:45.038308] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:19.356 null0 00:30:19.356 [2024-07-16 01:35:45.070373] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:19.356 [2024-07-16 01:35:45.070687] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:19.356 01:35:45 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.356 01:35:45 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:30:19.356 371408675 00:30:19.356 01:35:45 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:30:19.356 693571050 00:30:19.356 01:35:45 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:30:19.356 01:35:45 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3586671 00:30:19.356 01:35:45 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3586671 /var/tmp/bperf.sock 00:30:19.356 01:35:45 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3586671 ']' 00:30:19.356 01:35:45 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:19.356 01:35:45 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:19.356 01:35:45 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:19.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:19.356 01:35:45 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:19.356 01:35:45 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:19.356 [2024-07-16 01:35:45.126562] Starting SPDK v24.09-pre git sha1 315cf04b6 / DPDK 24.03.0 initialization... 
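Unlike keyring_file, keyring_linux stores the PSKs in the kernel session keyring: keyctl add prints the serial number (371408675 and 693571050 above) that later lookups resolve by name. The round trip, exactly as exercised in this trace (serial numbers will differ per run):

    psk=$(cat /tmp/:spdk-test:key0)              # interchange-format PSK
    keyctl add user :spdk-test:key0 "$psk" @s    # prints the new key's serial
    sn=$(keyctl search @s user :spdk-test:key0)  # resolve name -> serial
    keyctl print "$sn"                           # read the payload back
    keyctl unlink "$sn"                          # drop it from the session keyring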
00:30:19.356 [2024-07-16 01:35:45.126604] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3586671 ] 00:30:19.356 EAL: No free 2048 kB hugepages reported on node 1 00:30:19.356 [2024-07-16 01:35:45.181954] app.c: 914:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:19.356 [2024-07-16 01:35:45.260304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:20.284 01:35:45 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:20.284 01:35:45 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:30:20.284 01:35:45 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:30:20.284 01:35:45 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:30:20.284 01:35:46 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:30:20.284 01:35:46 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:20.540 01:35:46 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:30:20.540 01:35:46 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:30:20.540 [2024-07-16 01:35:46.479010] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:20.798 nvme0n1 00:30:20.798 01:35:46 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:30:20.798 01:35:46 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:30:20.798 01:35:46 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:30:20.798 01:35:46 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:30:20.798 01:35:46 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:30:20.798 01:35:46 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:20.798 01:35:46 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:30:20.798 01:35:46 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:30:20.798 01:35:46 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:30:20.798 01:35:46 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:30:20.798 01:35:46 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:20.798 01:35:46 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:30:20.798 01:35:46 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:21.055 01:35:46 keyring_linux -- keyring/linux.sh@25 -- # sn=371408675 00:30:21.055 01:35:46 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:30:21.055 01:35:46 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0
00:30:21.055 01:35:46 keyring_linux -- keyring/linux.sh@26 -- # [[ 371408675 == \3\7\1\4\0\8\6\7\5 ]]
00:30:21.055 01:35:46 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 371408675
00:30:21.055 01:35:46 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]]
00:30:21.055 01:35:46 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:30:21.055 Running I/O for 1 seconds...
00:30:22.427
00:30:22.427 Latency(us)
00:30:22.427 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:22.427 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:30:22.427 nvme0n1 : 1.01 19702.39 76.96 0.00 0.00 6471.56 5242.88 14792.41
00:30:22.427 ===================================================================================================================
00:30:22.427 Total : 19702.39 76.96 0.00 0.00 6471.56 5242.88 14792.41
00:30:22.427 0
00:30:22.427 01:35:48 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:30:22.427 01:35:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:30:22.427 01:35:48 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0
00:30:22.427 01:35:48 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name=
00:30:22.427 01:35:48 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:30:22.427 01:35:48 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:30:22.427 01:35:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:30:22.427 01:35:48 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:30:22.427 01:35:48 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count ))
00:30:22.427 01:35:48 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:30:22.427 01:35:48 keyring_linux -- keyring/linux.sh@23 -- # return
00:30:22.427 01:35:48 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:30:22.427 01:35:48 keyring_linux -- common/autotest_common.sh@648 -- # local es=0
00:30:22.427 01:35:48 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:30:22.427 01:35:48 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd
00:30:22.427 01:35:48 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:30:22.427 01:35:48 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd
00:30:22.427 01:35:48 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:30:22.427 01:35:48 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:30:22.427 01:35:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:30:22.684 [2024-07-16 01:35:48.540391] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:30:22.684 [2024-07-16 01:35:48.541005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de5e20 (107): Transport endpoint is not connected
00:30:22.684 [2024-07-16 01:35:48.542000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de5e20 (9): Bad file descriptor
00:30:22.685 [2024-07-16 01:35:48.543001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:30:22.685 [2024-07-16 01:35:48.543010] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:30:22.685 [2024-07-16 01:35:48.543017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:30:22.685 request:
00:30:22.685 {
00:30:22.685   "name": "nvme0",
00:30:22.685   "trtype": "tcp",
00:30:22.685   "traddr": "127.0.0.1",
00:30:22.685   "adrfam": "ipv4",
00:30:22.685   "trsvcid": "4420",
00:30:22.685   "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:30:22.685   "hostnqn": "nqn.2016-06.io.spdk:host0",
00:30:22.685   "prchk_reftag": false,
00:30:22.685   "prchk_guard": false,
00:30:22.685   "hdgst": false,
00:30:22.685   "ddgst": false,
00:30:22.685   "psk": ":spdk-test:key1",
00:30:22.685   "method": "bdev_nvme_attach_controller",
00:30:22.685   "req_id": 1
00:30:22.685 }
00:30:22.685 Got JSON-RPC error response
00:30:22.685 response:
00:30:22.685 {
00:30:22.685   "code": -5,
00:30:22.685   "message": "Input/output error"
00:30:22.685 }
00:30:22.685 01:35:48 keyring_linux -- common/autotest_common.sh@651 -- # es=1
00:30:22.685 01:35:48 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:30:22.685 01:35:48 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:30:22.685 01:35:48 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:30:22.685 01:35:48 keyring_linux -- keyring/linux.sh@1 -- # cleanup
00:30:22.685 01:35:48 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:30:22.685 01:35:48 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0
00:30:22.685 01:35:48 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn
00:30:22.685 01:35:48 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0
00:30:22.685 01:35:48 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:30:22.685 01:35:48 keyring_linux -- keyring/linux.sh@33 -- # sn=371408675
00:30:22.685 01:35:48 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 371408675
00:30:22.685 1 links removed
00:30:22.685 01:35:48 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:30:22.685 01:35:48 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
00:30:22.685 01:35:48 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:30:22.685 01:35:48 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:30:22.685 01:35:48 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:30:22.685 01:35:48 keyring_linux -- keyring/linux.sh@33 -- # sn=693571050
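The attach with :spdk-test:key1 above is a negative test: the target's listener only trusts key0, so the controller setup must fail with the -5 Input/output error shown, and the NOT wrapper turns that expected failure into a pass. The pattern, reduced to a sketch (the real helper in common/autotest_common.sh also tracks the exit status in es, as the trace shows):

    NOT() {
        # succeed only when the wrapped command fails
        if "$@"; then
            return 1
        fi
        return 0
    }
    NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
        -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1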
01:35:48 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 693571050 00:30:22.685 1 links removed 00:30:22.685 01:35:48 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3586671 00:30:22.685 01:35:48 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3586671 ']' 00:30:22.685 01:35:48 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3586671 00:30:22.685 01:35:48 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:30:22.685 01:35:48 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:22.685 01:35:48 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3586671 00:30:22.685 01:35:48 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:22.685 01:35:48 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:22.685 01:35:48 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3586671' 00:30:22.685 killing process with pid 3586671 00:30:22.685 01:35:48 keyring_linux -- common/autotest_common.sh@967 -- # kill 3586671 00:30:22.685 Received shutdown signal, test time was about 1.000000 seconds 00:30:22.685 00:30:22.685 Latency(us) 00:30:22.685 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:22.685 =================================================================================================================== 00:30:22.685 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:22.685 01:35:48 keyring_linux -- common/autotest_common.sh@972 -- # wait 3586671 00:30:22.942 01:35:48 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3586642 00:30:22.942 01:35:48 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3586642 ']' 00:30:22.942 01:35:48 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3586642 00:30:22.942 01:35:48 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:30:22.942 01:35:48 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:22.942 01:35:48 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3586642 00:30:22.942 01:35:48 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:22.942 01:35:48 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:22.942 01:35:48 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3586642' 00:30:22.942 killing process with pid 3586642 00:30:22.942 01:35:48 keyring_linux -- common/autotest_common.sh@967 -- # kill 3586642 00:30:22.942 01:35:48 keyring_linux -- common/autotest_common.sh@972 -- # wait 3586642 00:30:23.200 00:30:23.200 real 0m5.118s 00:30:23.200 user 0m9.382s 00:30:23.200 sys 0m1.405s 00:30:23.200 01:35:49 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:23.200 01:35:49 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:23.200 ************************************ 00:30:23.200 END TEST keyring_linux 00:30:23.200 ************************************ 00:30:23.200 01:35:49 -- common/autotest_common.sh@1142 -- # return 0 00:30:23.200 01:35:49 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:30:23.200 01:35:49 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:30:23.200 01:35:49 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:30:23.200 01:35:49 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:30:23.200 01:35:49 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:30:23.200 01:35:49 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:30:23.200 01:35:49 -- 
spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:30:23.200 01:35:49 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:30:23.200 01:35:49 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:30:23.200 01:35:49 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:30:23.200 01:35:49 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:30:23.200 01:35:49 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:30:23.200 01:35:49 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:30:23.200 01:35:49 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:30:23.200 01:35:49 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:30:23.200 01:35:49 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:30:23.200 01:35:49 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:30:23.200 01:35:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:23.200 01:35:49 -- common/autotest_common.sh@10 -- # set +x 00:30:23.200 01:35:49 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:30:23.200 01:35:49 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:30:23.200 01:35:49 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:30:23.200 01:35:49 -- common/autotest_common.sh@10 -- # set +x 00:30:28.508 INFO: APP EXITING 00:30:28.508 INFO: killing all VMs 00:30:28.508 INFO: killing vhost app 00:30:28.508 INFO: EXIT DONE 00:30:30.404 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:30:30.662 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:30:30.662 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:30:30.662 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:30:30.662 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:30:30.662 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:30:30.662 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:30:30.662 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:30:30.662 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:30:30.662 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:30:30.662 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:30:30.662 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:30:30.662 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:30:30.919 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:30:30.919 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:30:30.919 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:30:30.919 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:30:33.445 Cleaning 00:30:33.445 Removing: /var/run/dpdk/spdk0/config 00:30:33.445 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:30:33.445 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:30:33.445 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:30:33.445 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:30:33.445 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:30:33.445 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:30:33.445 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:30:33.445 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:30:33.445 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:30:33.445 Removing: /var/run/dpdk/spdk0/hugepage_info 00:30:33.445 Removing: /var/run/dpdk/spdk1/config 00:30:33.445 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:30:33.445 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:30:33.445 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:30:33.445 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:30:33.445 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:30:33.445 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:30:33.445 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:30:33.445 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:30:33.445 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:30:33.445 Removing: /var/run/dpdk/spdk1/hugepage_info 00:30:33.445 Removing: /var/run/dpdk/spdk1/mp_socket 00:30:33.445 Removing: /var/run/dpdk/spdk2/config 00:30:33.445 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:30:33.445 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:30:33.445 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:30:33.445 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:30:33.445 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:30:33.445 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:30:33.445 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:30:33.445 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:30:33.445 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:30:33.445 Removing: /var/run/dpdk/spdk2/hugepage_info 00:30:33.445 Removing: /var/run/dpdk/spdk3/config 00:30:33.445 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:30:33.445 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:30:33.445 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:30:33.445 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:30:33.445 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:30:33.445 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:30:33.445 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:30:33.445 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:30:33.445 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:30:33.445 Removing: /var/run/dpdk/spdk3/hugepage_info 00:30:33.445 Removing: /var/run/dpdk/spdk4/config 00:30:33.445 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:30:33.445 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:30:33.445 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:30:33.445 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:30:33.445 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:30:33.445 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:30:33.445 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:30:33.445 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:30:33.445 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:30:33.445 Removing: /var/run/dpdk/spdk4/hugepage_info 00:30:33.445 Removing: /dev/shm/bdev_svc_trace.1 00:30:33.445 Removing: /dev/shm/nvmf_trace.0 00:30:33.445 Removing: /dev/shm/spdk_tgt_trace.pid3202655 00:30:33.445 Removing: /var/run/dpdk/spdk0 00:30:33.445 Removing: /var/run/dpdk/spdk1 00:30:33.445 Removing: /var/run/dpdk/spdk2 00:30:33.445 Removing: /var/run/dpdk/spdk3 00:30:33.445 Removing: /var/run/dpdk/spdk4 00:30:33.445 Removing: /var/run/dpdk/spdk_pid3200280 00:30:33.445 Removing: /var/run/dpdk/spdk_pid3201363 00:30:33.445 Removing: /var/run/dpdk/spdk_pid3202655 00:30:33.445 Removing: /var/run/dpdk/spdk_pid3203288 00:30:33.445 Removing: /var/run/dpdk/spdk_pid3204229 00:30:33.445 Removing: /var/run/dpdk/spdk_pid3204475 00:30:33.445 Removing: /var/run/dpdk/spdk_pid3205442 00:30:33.445 Removing: /var/run/dpdk/spdk_pid3205470 00:30:33.445 Removing: /var/run/dpdk/spdk_pid3205802 00:30:33.445 Removing: /var/run/dpdk/spdk_pid3207435 00:30:33.445 Removing: 
/var/run/dpdk/spdk_pid3208799 00:30:33.445 Removing: /var/run/dpdk/spdk_pid3209081 00:30:33.445 Removing: /var/run/dpdk/spdk_pid3209371 00:30:33.445 Removing: /var/run/dpdk/spdk_pid3209670 00:30:33.445 Removing: /var/run/dpdk/spdk_pid3209964 00:30:33.445 Removing: /var/run/dpdk/spdk_pid3210216 00:30:33.445 Removing: /var/run/dpdk/spdk_pid3210466 00:30:33.445 Removing: /var/run/dpdk/spdk_pid3210744 00:30:33.445 Removing: /var/run/dpdk/spdk_pid3211490 00:30:33.445 Removing: /var/run/dpdk/spdk_pid3214475 00:30:33.445 Removing: /var/run/dpdk/spdk_pid3214735 00:30:33.702 Removing: /var/run/dpdk/spdk_pid3215089 00:30:33.702 Removing: /var/run/dpdk/spdk_pid3215231 00:30:33.702 Removing: /var/run/dpdk/spdk_pid3215718 00:30:33.702 Removing: /var/run/dpdk/spdk_pid3215872 00:30:33.702 Removing: /var/run/dpdk/spdk_pid3216222 00:30:33.702 Removing: /var/run/dpdk/spdk_pid3216449 00:30:33.702 Removing: /var/run/dpdk/spdk_pid3216714 00:30:33.702 Removing: /var/run/dpdk/spdk_pid3216835 00:30:33.702 Removing: /var/run/dpdk/spdk_pid3216990 00:30:33.702 Removing: /var/run/dpdk/spdk_pid3217219 00:30:33.702 Removing: /var/run/dpdk/spdk_pid3217700 00:30:33.702 Removing: /var/run/dpdk/spdk_pid3217909 00:30:33.702 Removing: /var/run/dpdk/spdk_pid3218209 00:30:33.702 Removing: /var/run/dpdk/spdk_pid3218477 00:30:33.702 Removing: /var/run/dpdk/spdk_pid3218608 00:30:33.702 Removing: /var/run/dpdk/spdk_pid3218673 00:30:33.702 Removing: /var/run/dpdk/spdk_pid3218925 00:30:33.702 Removing: /var/run/dpdk/spdk_pid3219173 00:30:33.702 Removing: /var/run/dpdk/spdk_pid3219426 00:30:33.702 Removing: /var/run/dpdk/spdk_pid3219671 00:30:33.702 Removing: /var/run/dpdk/spdk_pid3219920 00:30:33.702 Removing: /var/run/dpdk/spdk_pid3220172 00:30:33.702 Removing: /var/run/dpdk/spdk_pid3220419 00:30:33.702 Removing: /var/run/dpdk/spdk_pid3220668 00:30:33.702 Removing: /var/run/dpdk/spdk_pid3220924 00:30:33.703 Removing: /var/run/dpdk/spdk_pid3221170 00:30:33.703 Removing: /var/run/dpdk/spdk_pid3221417 00:30:33.703 Removing: /var/run/dpdk/spdk_pid3221668 00:30:33.703 Removing: /var/run/dpdk/spdk_pid3221918 00:30:33.703 Removing: /var/run/dpdk/spdk_pid3222166 00:30:33.703 Removing: /var/run/dpdk/spdk_pid3222417 00:30:33.703 Removing: /var/run/dpdk/spdk_pid3222667 00:30:33.703 Removing: /var/run/dpdk/spdk_pid3222925 00:30:33.703 Removing: /var/run/dpdk/spdk_pid3223178 00:30:33.703 Removing: /var/run/dpdk/spdk_pid3223426 00:30:33.703 Removing: /var/run/dpdk/spdk_pid3223687 00:30:33.703 Removing: /var/run/dpdk/spdk_pid3223951 00:30:33.703 Removing: /var/run/dpdk/spdk_pid3224266 00:30:33.703 Removing: /var/run/dpdk/spdk_pid3228027 00:30:33.703 Removing: /var/run/dpdk/spdk_pid3271762 00:30:33.703 Removing: /var/run/dpdk/spdk_pid3275999 00:30:33.703 Removing: /var/run/dpdk/spdk_pid3286314 00:30:33.703 Removing: /var/run/dpdk/spdk_pid3291708 00:30:33.703 Removing: /var/run/dpdk/spdk_pid3295691 00:30:33.703 Removing: /var/run/dpdk/spdk_pid3296348 00:30:33.703 Removing: /var/run/dpdk/spdk_pid3302380 00:30:33.703 Removing: /var/run/dpdk/spdk_pid3308395 00:30:33.703 Removing: /var/run/dpdk/spdk_pid3308399 00:30:33.703 Removing: /var/run/dpdk/spdk_pid3309312 00:30:33.703 Removing: /var/run/dpdk/spdk_pid3310160 00:30:33.703 Removing: /var/run/dpdk/spdk_pid3310933 00:30:33.703 Removing: /var/run/dpdk/spdk_pid3311617 00:30:33.703 Removing: /var/run/dpdk/spdk_pid3311621 00:30:33.703 Removing: /var/run/dpdk/spdk_pid3311856 00:30:33.703 Removing: /var/run/dpdk/spdk_pid3311923 00:30:33.703 Removing: /var/run/dpdk/spdk_pid3312074 00:30:33.703 Removing: 
/var/run/dpdk/spdk_pid3312842 00:30:33.703 Removing: /var/run/dpdk/spdk_pid3313702 00:30:33.703 Removing: /var/run/dpdk/spdk_pid3314616 00:30:33.703 Removing: /var/run/dpdk/spdk_pid3315129 00:30:33.703 Removing: /var/run/dpdk/spdk_pid3315304 00:30:33.703 Removing: /var/run/dpdk/spdk_pid3315534 00:30:33.703 Removing: /var/run/dpdk/spdk_pid3316555 00:30:33.703 Removing: /var/run/dpdk/spdk_pid3317749 00:30:33.703 Removing: /var/run/dpdk/spdk_pid3326580 00:30:33.703 Removing: /var/run/dpdk/spdk_pid3326840 00:30:33.703 Removing: /var/run/dpdk/spdk_pid3330870 00:30:33.703 Removing: /var/run/dpdk/spdk_pid3336731 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3339328 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3349515 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3358521 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3360230 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3361151 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3378257 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3382016 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3406442 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3410881 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3413013 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3414906 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3415076 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3415217 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3415401 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3416133 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3417971 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3418963 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3419455 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3421595 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3422280 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3423007 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3427211 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3437083 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3441018 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3446995 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3448297 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3449842 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3454634 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3458658 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3466216 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3466233 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3470727 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3470959 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3471184 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3471634 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3471646 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3476123 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3476691 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3481029 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3483778 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3489191 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3494539 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3503368 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3510559 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3510561 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3528605 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3529105 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3529801 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3530497 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3531347 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3531941 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3532640 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3533339 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3537588 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3537822 00:30:33.960 Removing: 
/var/run/dpdk/spdk_pid3543661 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3543933 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3546159 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3554399 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3554404 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3559643 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3561504 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3563383 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3564646 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3566619 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3567795 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3576422 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3576892 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3577543 00:30:33.960 Removing: /var/run/dpdk/spdk_pid3579820 00:30:34.217 Removing: /var/run/dpdk/spdk_pid3580285 00:30:34.217 Removing: /var/run/dpdk/spdk_pid3580753 00:30:34.217 Removing: /var/run/dpdk/spdk_pid3584544 00:30:34.217 Removing: /var/run/dpdk/spdk_pid3584583 00:30:34.217 Removing: /var/run/dpdk/spdk_pid3586096 00:30:34.217 Removing: /var/run/dpdk/spdk_pid3586642 00:30:34.217 Removing: /var/run/dpdk/spdk_pid3586671 00:30:34.217 Clean 00:30:34.217 01:36:00 -- common/autotest_common.sh@1451 -- # return 0 00:30:34.217 01:36:00 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:30:34.217 01:36:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:34.217 01:36:00 -- common/autotest_common.sh@10 -- # set +x 00:30:34.217 01:36:00 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:30:34.217 01:36:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:34.217 01:36:00 -- common/autotest_common.sh@10 -- # set +x 00:30:34.217 01:36:00 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:30:34.217 01:36:00 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:30:34.217 01:36:00 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:30:34.217 01:36:00 -- spdk/autotest.sh@391 -- # hash lcov 00:30:34.217 01:36:00 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:30:34.217 01:36:00 -- spdk/autotest.sh@393 -- # hostname 00:30:34.217 01:36:00 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:30:34.475 geninfo: WARNING: invalid characters removed from testname! 
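With the tests done, autotest folds the baseline and per-test coverage captures into one tracefile and strips everything outside the SPDK tree, as the lcov invocations below show. Condensed to the essential steps, with LCOV_OPTS standing in for the --rc flags repeated on every call:

    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    lcov $LCOV_OPTS -q -c -d "$rootdir" -t spdk-wfp-06 -o cov_test.info      # capture
    lcov $LCOV_OPTS -q -a cov_base.info -a cov_test.info -o cov_total.info   # merge
    lcov $LCOV_OPTS -q -r cov_total.info '*/dpdk/*' -o cov_total.info        # prune
    lcov $LCOV_OPTS -q -r cov_total.info '/usr/*' -o cov_total.info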
00:30:56.429 01:36:20 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:56.686 01:36:22 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:58.587 01:36:24 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:31:00.507 01:36:26 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:31:01.885 01:36:27 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:31:03.788 01:36:29 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:31:05.689 01:36:31 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:31:05.689 01:36:31 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:05.689 01:36:31 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:31:05.689 01:36:31 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:05.689 01:36:31 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:05.689 01:36:31 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.689 01:36:31 -- paths/export.sh@3 -- $ 
00:31:05.689 01:36:31 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:31:05.689 01:36:31 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:31:05.689 01:36:31 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:31:05.689 01:36:31 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:31:05.689 01:36:31 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:05.689 01:36:31 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:05.689 01:36:31 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:05.689 01:36:31 -- paths/export.sh@5 -- $ export PATH
00:31:05.689 01:36:31 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
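The three paths/export.sh assignments above each prepend one toolchain directory (golangci-lint, Go, protoc) to the inherited PATH, so directories already present, including earlier prepends from the same file, simply repeat; the shell resolves commands from the first match, so the duplicates are harmless but make the exported value hard to read. A guard like the following hypothetical helper (not part of paths/export.sh) would keep the list idempotent:

    # Prepend a directory to PATH only if it is not already present.
    # path_prepend is an illustrative helper, not code from this job.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;            # already on PATH: leave it unchanged
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/golangci/1.54.2/bin
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/protoc/21.7/bin
    export PATH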
00:31:05.689 01:36:31 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:31:05.689 01:36:31 -- common/autobuild_common.sh@444 -- $ date +%s
00:31:05.689 01:36:31 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721086591.XXXXXX
00:31:05.689 01:36:31 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721086591.1rTEkK
00:31:05.689 01:36:31 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:31:05.689 01:36:31 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:31:05.689 01:36:31 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:31:05.689 01:36:31 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:31:05.689 01:36:31 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:31:05.689 01:36:31 -- common/autobuild_common.sh@460 -- $ get_config_params
00:31:05.689 01:36:31 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:31:05.689 01:36:31 -- common/autotest_common.sh@10 -- $ set +x
00:31:05.689 01:36:31 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:31:05.689 01:36:31 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:31:05.689 01:36:31 -- pm/common@17 -- $ local monitor
00:31:05.689 01:36:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:05.689 01:36:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:05.689 01:36:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:05.689 01:36:31 -- pm/common@21 -- $ date +%s
00:31:05.689 01:36:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:05.689 01:36:31 -- pm/common@21 -- $ date +%s
00:31:05.689 01:36:31 -- pm/common@25 -- $ sleep 1
00:31:05.689 01:36:31 -- pm/common@21 -- $ date +%s
00:31:05.689 01:36:31 -- pm/common@21 -- $ date +%s
00:31:05.690 01:36:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721086591
00:31:05.690 01:36:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721086591
00:31:05.690 01:36:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721086591
00:31:05.690 01:36:31 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721086591
00:31:05.690 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721086591_collect-vmstat.pm.log
00:31:05.690 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721086591_collect-cpu-load.pm.log
00:31:05.690 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721086591_collect-cpu-temp.pm.log
00:31:05.690 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721086591_collect-bmc-pm.bmc.pm.log
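start_monitor_resources launches one sampler per entry in MONITOR_RESOURCES (CPU load, vmstat, CPU temperature, and, via sudo, BMC power); each collect-* script backgrounds itself, redirects its samples to a timestamped .pm.log under power/, and records its PID in a <name>.pid file so that the pm/common@42-@50 entries further below can tear it down with kill -TERM. A sketch in the spirit of that pid-file teardown, with POWER_DIR as a placeholder for .../output/power and the pid-file removal an assumption rather than something visible in this log:

    # Signal each monitor via its pid file (illustrative sketch, not the
    # actual pm/common code; POWER_DIR is a placeholder path).
    POWER_DIR=/path/to/output/power
    for monitor in collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm; do
        pidfile="$POWER_DIR/$monitor.pid"
        [[ -e $pidfile ]] || continue     # monitor never started
        kill -TERM "$(<"$pidfile")"       # collect-bmc-pm needs sudo, per the log
        rm -f "$pidfile"                  # assumed cleanup, not shown in the log
    done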
00:31:06.624 01:36:32 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:31:06.624 01:36:32 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96
00:31:06.624 01:36:32 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:31:06.624 01:36:32 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:31:06.624 01:36:32 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:31:06.624 01:36:32 -- spdk/autopackage.sh@19 -- $ timing_finish
00:31:06.624 01:36:32 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:31:06.624 01:36:32 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:31:06.624 01:36:32 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:31:06.624 01:36:32 -- spdk/autopackage.sh@20 -- $ exit 0
00:31:06.624 01:36:32 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:31:06.624 01:36:32 -- pm/common@29 -- $ signal_monitor_resources TERM
00:31:06.624 01:36:32 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:31:06.624 01:36:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:06.625 01:36:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:31:06.625 01:36:32 -- pm/common@44 -- $ pid=3597351
00:31:06.625 01:36:32 -- pm/common@50 -- $ kill -TERM 3597351
00:31:06.625 01:36:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:06.625 01:36:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:31:06.625 01:36:32 -- pm/common@44 -- $ pid=3597352
00:31:06.625 01:36:32 -- pm/common@50 -- $ kill -TERM 3597352
00:31:06.625 01:36:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:06.625 01:36:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:31:06.625 01:36:32 -- pm/common@44 -- $ pid=3597354
00:31:06.625 01:36:32 -- pm/common@50 -- $ kill -TERM 3597354
00:31:06.625 01:36:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:31:06.625 01:36:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:31:06.625 01:36:32 -- pm/common@44 -- $ pid=3597378
00:31:06.625 01:36:32 -- pm/common@50 -- $ sudo -E kill -TERM 3597378
00:31:06.625 + [[ -n 3096944 ]]
00:31:06.625 + sudo kill 3096944
00:31:06.892 [Pipeline] }
00:31:06.910 [Pipeline] // stage
00:31:06.915 [Pipeline] }
00:31:06.932 [Pipeline] // timeout
00:31:06.937 [Pipeline] }
00:31:06.954 [Pipeline] // catchError
00:31:06.959 [Pipeline] }
00:31:06.977 [Pipeline] // wrap
00:31:06.982 [Pipeline] }
00:31:06.998 [Pipeline] // catchError
00:31:07.006 [Pipeline] stage
00:31:07.008 [Pipeline] { (Epilogue)
00:31:07.022 [Pipeline] catchError
00:31:07.023 [Pipeline] {
00:31:07.036 [Pipeline] echo
00:31:07.038 Cleanup processes
00:31:07.044 [Pipeline] sh
00:31:07.413 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:31:07.413 3597478 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:31:07.413 3597755 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:31:07.425 [Pipeline] sh
00:31:07.704 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:31:07.704 ++ grep -v 'sudo pgrep'
00:31:07.704 ++ awk '{print $1}'
00:31:07.704 + sudo kill -9 3597478
00:31:07.715 [Pipeline] sh
00:31:07.994 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:31:17.968 [Pipeline] sh
00:31:18.249 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:31:18.249 Artifacts sizes are good
00:31:18.263 [Pipeline] archiveArtifacts
00:31:18.269 Archiving artifacts
00:31:18.421 [Pipeline] sh
00:31:18.727 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:31:18.740 [Pipeline] cleanWs
00:31:18.748 [WS-CLEANUP] Deleting project workspace...
00:31:18.748 [WS-CLEANUP] Deferred wipeout is used...
00:31:18.753 [WS-CLEANUP] done
00:31:18.755 [Pipeline] }
00:31:18.770 [Pipeline] // catchError
00:31:18.779 [Pipeline] sh
00:31:19.051 + logger -p user.info -t JENKINS-CI
00:31:19.059 [Pipeline] }
00:31:19.077 [Pipeline] // stage
00:31:19.083 [Pipeline] }
00:31:19.100 [Pipeline] // node
00:31:19.106 [Pipeline] End of Pipeline
00:31:19.160 Finished: SUCCESS